ESMA publishes responses to the Discussion Paper on validation and review of CRAs’ methodologies

21 March 2016

ESMA published the responses received to its Discussion Paper from A.M. Best Europe, Fitch, Moody's and the EIB, among others.

A.M. Best Europe

A.M. Best agrees with ESMA’s view that the discriminatory power of a methodology relates to its ability to rank order the rated entities according to their future status at some predefined time horizon.

A.M. Best understands that the Accuracy Ratio helps to demonstrate the discriminatory power of methodologies. However, A.M. Best does not agree that the Accuracy Ratio is the minimum statistical measure that a CRA should use as part of its validation processes. A.M. Best believes the Accuracy Ratio should only be applicable to certain types of ratings that have pre-set default probabilities and sufficient historical performance data.

A.M. Best agrees with ESMA’s view that the predictive power of a methodology can be demonstrated by comparing the expected behaviour of the ratings assigned under the methodology to the observed results. However, A.M. Best believes that not all CRAs should define their expectations by rating category with regard to the measure of creditworthiness to which their ratings refer. The ratings issued by A.M. Best indicate the relative degree of credit risk of an obligor or debt instrument rather than reflecting a specific default probability.
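The expected-vs-observed comparison described above can be sketched as follows. All figures, category labels, and range values are illustrative assumptions, not actual A.M. Best or ESMA data.

```python
# Minimal sketch of a predictive-power check: compare each rating
# category's expected one-year default-rate range against the observed rate.
# All numbers below are hypothetical.

# Hypothetical expected one-year default-rate ranges per rating category.
expected = {
    "AAA": (0.0000, 0.0005),
    "AA":  (0.0005, 0.0020),
    "A":   (0.0020, 0.0080),
    "BBB": (0.0080, 0.0300),
}

# Hypothetical observed outcomes: (defaults, rated entities) per category.
observed = {
    "AAA": (0, 2500),
    "AA":  (2, 3000),
    "A":   (10, 2800),
    "BBB": (45, 1500),
}

def check_predictive_power(expected, observed):
    """Flag categories whose observed default rate falls outside the
    expected range defined for that category."""
    results = {}
    for category, (lo, hi) in expected.items():
        defaults, total = observed[category]
        rate = defaults / total
        results[category] = lo <= rate <= hi
    return results

print(check_predictive_power(expected, observed))
```

A.M. Best's objection is precisely that, for ratings defined only as relative rankings, the `expected` table above has no natural definition.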

Full response


Euler Hermes Rating GmbH

Euler Hermes emphasises that it is crucial to decide on the application of such measures with respect to their explanatory power and to the amount of relevant, available data.

As the Accuracy Ratio (AR) and the Area Under the ROC curve (AUROC) are mathematically equivalent (related by an affine linear transformation), the ROC curve and the derived AUROC value add no material information on discriminatory power. Additionally, AR and AUROC are themselves random variables dependent on the sample, especially on the number of defaults and non-defaults in a given period. Euler Hermes expects confidence intervals for AR/AUROC to add further information on discriminatory power.
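The equivalence Euler Hermes refers to is AR = 2·AUROC − 1. A minimal sketch, using illustrative scores and default flags and a stdlib-only AUROC computed via the Mann-Whitney U statistic:

```python
# Sketch of the AR/AUROC affine equivalence: AR = 2 * AUROC - 1.
# Scores and default flags below are illustrative, not real rating data.

def auroc(scores, defaults):
    """AUROC as the probability that a randomly chosen defaulter has a
    higher (riskier) score than a randomly chosen non-defaulter,
    counting ties as 0.5 (Mann-Whitney U formulation)."""
    bad = [s for s, d in zip(scores, defaults) if d == 1]
    good = [s for s, d in zip(scores, defaults) if d == 0]
    wins = sum((b > g) + 0.5 * (b == g) for b in bad for g in good)
    return wins / (len(bad) * len(good))

# Higher score = riskier; default flag 1 = defaulted within the horizon.
scores   = [9, 8, 8, 7, 5, 4, 3, 2, 2, 1]
defaults = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]

a = auroc(scores, defaults)
ar = 2 * a - 1   # the affine linear transformation noted in the response
print(a, ar)     # 0.75 0.5
```

Because both statistics are computed from the same sample of defaults and non-defaults, resampling that sample (e.g. bootstrapping) yields the confidence intervals Euler Hermes argues are the genuinely informative addition.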

Euler Hermes considers the application of predictive power measures systematically contradictory to the rank-order system underlying credit ratings, as those are not supposed to provide fixed default probabilities. It might also turn credit ratings from a through-the-cycle measure into a point-in-time measure (equivalent to bank-internal ratings), which could lead to more rating transitions over time.

Full response


Fitch Ratings

Policymakers foresaw a possibility that regulators could encroach on CRAs’ methodologies when implementing this legislation. To prevent this from happening, policymakers explicitly state in the remainder of the paragraph that ‘such a requirement should not, however, provide grounds for interference with the content of credit ratings and methodologies by the competent authorities and the Member States.’ ESMA’s definition of predictive power changes the definition of a rating and requires changes in methodologies. Fitch sees this as interfering with the content and meaning of credit ratings. Whilst Fitch thinks this is unlikely to be ESMA’s intention, it would be a consequence.

Fitch’s existing ratings definitions explicitly state that “credit ratings express risk in relative rank order, which is to say they are ordinal measures of credit risk and are not predictive of a specific frequency of default or loss”. Assigning ‘expectations (absolute numbers or ranges) per rating category’ would conflict with this definition and so would require a change to its product.

Fitch’s existing criteria assess the vulnerability of a transaction or entity to credit risk, but do not assign a probability of that event happening. To do this, Fitch would need to switch to a cardinal scale of default, which would be inherently more volatile. Transactions and entities would have to be upgraded or downgraded based on the probability of an event happening rather than their vulnerability to that event.

Full response


Moody's Investors Service Ltd.

The Discussion Paper is based on the premise that credit ratings are precise, absolute and unqualified measures of credit risk. In fact, credit ratings are forecasts, qualified by a number of assumptions about an uncertain future and rank ordered on a comparative scale.

Considering factors such as small data samples, different time horizons and evolving credit conditions, a number of the statistical tests suggested by the Discussion Paper would add little value, produce misleading outcomes, and increase ratings volatility. 

The proposed measures, when taken together, signal a level of discomfort with the innately uncertain, subjective nature of the credit ratings system. Credit rating systems are less scientific in nature than many would like to assume. A common misperception exists that credit ratings are binary – i.e., “pass-fail” or “high-low” – perhaps because bonds ultimately behave in a binary manner: that is, either they default or they do not. At the time that MIS forms its credit opinion about any given bond, however, it is not yet known whether and to what extent the bond will perform or default. It is simply not possible to predict the future with absolute precision. For that reason, MIS has developed a non-binary rating system that reflects its view of the relative future credit risk of issuers and financial obligations; its rating scale is best understood as a relative scale, ordinal in nature.

More pointedly, no amount of validation, review and back-testing will change the fundamental essence of credit ratings or the nature of the credit rating process. ESMA has recognised that the determination of credit ratings involves subjective judgment. To that end, in assigning credit opinions, analysts adhere to MIS’s published credit rating methodologies, but bring to bear their personal perspective informed by their experience in, and understanding of, the sector. That is to say, Moody’s methodologies are not computer code and the resulting credit ratings are not formula-driven. An assessment process that treats methodologies as code is ill-suited for credit ratings, will provide skewed results and will inadvertently foster the misperception of credit ratings.

Full response


European Investment Bank

EIB supports the view that validation and review of CRA rating methodologies should be supported by quantitative evidence.

It believes that disclosure of the validation process performed by a CRA and its summarised results are as important as the process itself. To enhance transparency, CRAs should at a minimum publicly disclose the validation process undertaken, the techniques used with brief justifications, the alternatives used where quantitative techniques are not feasible (and the reasons why), and summary results of the validation process per rating model. Moreover, CRAs should clearly disclose any revisions to their methodologies resulting from their validation process.

CRAs should demonstrate sufficiently that their methodologies result in a robust ranking of rated entities.

CRAs should be able to demonstrate that their ratings can be mapped to relatively narrow PD ranges which do not fluctuate significantly over time.
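The stability check the EIB describes could be sketched as follows: for each rating category, compute the observed default rate per cohort year and test that the spread across years stays within a tolerance. All rates, years, and the tolerance are illustrative assumptions.

```python
# Minimal sketch of the EIB's suggested check: per-rating observed default
# rates should stay within a narrow band across cohort years.
# All numbers below are hypothetical.

# Hypothetical observed one-year default rates per rating, by cohort year.
observed_pd = {
    "A":   {2012: 0.0030, 2013: 0.0028, 2014: 0.0035},
    "BBB": {2012: 0.0110, 2013: 0.0240, 2014: 0.0090},
}

def pd_range_is_stable(rates_by_year, max_spread):
    """True if the gap between the highest and lowest yearly default
    rate does not exceed the chosen tolerance."""
    rates = list(rates_by_year.values())
    return max(rates) - min(rates) <= max_spread

for rating, rates in observed_pd.items():
    print(rating, pd_range_is_stable(rates, max_spread=0.005))
```

The choice of `max_spread` is the crux: a tolerance tight enough to be meaningful pushes ratings towards point-in-time behaviour, which is exactly the tension Fitch and Euler Hermes raise above.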

Full response


Full discussion paper


© ESMA