Credit where credit is due: calculating credit-to-GDP gaps

In the wake of the Great Financial Crisis, the Basel Committee on Banking Supervision (BCBS) argued for counter-cyclical capital buffers and published operational guidelines for regulators (BCBS 2010). The cornerstone of the BCBS proposal was to use cyclical movements in the credit-to-GDP ratio gap to trigger increases in the capital that banks are required to hold.[1] The BCBS recommendations argued that credit-to-GDP ratio gaps (hereafter, credit gaps) are useful, if imperfect, leading indicators of banking crises.

The term “gap” signifies the deviation of an observable variable from where it should be in “normal” times, which we refer to as the “trend”; we leave the definition of “normal” unspecified for the time being. Calculating such gaps requires two ingredients: the data, and the method for calculating, or estimating, the trend and therefore the gap. The BCBS is quite clear on both fronts. The new regulatory framework under Basel III requires national authorities to “…monitor credit growth and other indicators that may signal a build-up of system-wide risk” (BCBS 2011). More specifically, Basel III suggests the use of data on total credit to the private non-financial sector, capturing not only bank credit but all sources of credit, including bonds and cross-border finance.

Turning to the estimation method, credit gaps are estimated with a one-sided Hodrick-Prescott (HP) filter applied to the credit-to-GDP ratio, with a smoothing parameter of 400,000. Borio and Lowe (2004), who first proposed and used this smoothing parameter, argued that “it meant that the trends are smoothed considerably more than normal, better capturing low-frequency, cumulative deviations, and implicitly putting more weight on the mean-reversion tendency of the processes”. In a recent SEACEN Centre Working Paper we investigate some properties of this filter.
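To make the mechanics concrete, here is a minimal sketch of how a one-sided HP trend is typically computed: the standard (two-sided) filter is run on an expanding sample and only the endpoint of each trend estimate is kept, so that no future data enter the estimate at time t. The series `credit_to_gdp` and the helper `one_sided_hp` are hypothetical illustrations under this assumption, not the code used in the paper.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def one_sided_hp(y, lamb=400_000, min_obs=20):
    """Endpoint of a two-sided HP trend estimated on each expanding sample."""
    y = np.asarray(y, dtype=float)
    trend = np.full(y.shape, np.nan)
    for t in range(min_obs, len(y) + 1):
        _, tr = hpfilter(y[:t], lamb=lamb)  # two-sided HP on data up to t only
        trend[t - 1] = tr[-1]               # keep only the last point
    return trend

# Hypothetical quarterly credit-to-GDP ratio (percent), 30 years of data
rng = np.random.default_rng(0)
credit_to_gdp = 100 + np.cumsum(rng.normal(0.5, 1.5, 120))

trend = one_sided_hp(credit_to_gdp)
credit_gap = credit_to_gdp - trend  # the credit gap, in percentage points
```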

Any (two-sided) filter can be characterised as a weighted average of the observed data (past, present and future), which we believe is a very intuitive way of investigating the estimated trends or gaps. For example, with the weights to hand, one can evaluate how sensitive the estimated trends or gaps are to extreme values, outliers and so on. We use the weighted-average representation of the filter proposed by the BCBS and find that it has some undesirable properties. We identify two issues.

First, a higher smoothing parameter means that past information receives more weight in determining the current estimate of the trend. For example, the latest available observation receives a weight of 6.1%, while observations from 10, 20 and 30 quarters ago still receive weights of 4.4%, 2.8% and 1.5%, respectively. In other words, the distant past exerts considerable influence on the estimate of the current level of the trend, and hence on the credit gap.
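Magnitudes of this sort can be computed directly. The HP trend solves a penalised least-squares problem whose solution is trend = (I + λD′D)⁻¹y, where D is the second-difference matrix, so the end-of-sample (one-sided) trend is the last row of that inverse matrix times the data. Below is a sketch, with a hypothetical helper `hp_endpoint_weights`; the exact weight values depend on the sample length T, so the printed figures should only be broadly in line with those quoted above.

```python
import numpy as np

def hp_endpoint_weights(T, lamb):
    """Weights w such that the end-of-sample HP trend equals w @ y."""
    # Second-difference operator D: (T-2) x T matrix with rows [1, -2, 1]
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    # HP solution: trend = (I + lamb * D'D)^{-1} y; the last row of the
    # inverse gives the weights on y_1, ..., y_T for the endpoint estimate.
    F = np.linalg.inv(np.eye(T) + lamb * (D.T @ D))
    return F[-1]

w = hp_endpoint_weights(T=120, lamb=400_000)
for lag in (0, 10, 20, 30):
    print(f"lag {lag:2d} quarters: weight = {100 * w[-1 - lag]:.1f}%")
```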

Second, the weights on past data die out much more slowly than under lower smoothing parameters. With a smoothing parameter of 400,000, it takes up to 40 quarters for a particular observation to drop out of the trend or gap estimates. This slow decay of past information, combined with the flatter (more equal) weights attached to it, means that the trend estimates are very smooth and the resulting gaps are very persistent, indeed almost non-stationary.
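As a rough illustration of this second point, the `hp_endpoint_weights` helper from the sketch above can be used to compare how quickly the endpoint weights die out under the conventional quarterly smoothing parameter of 1,600 versus 400,000. The 1% cutoff here is an arbitrary threshold chosen for illustration, not part of the BCBS guidance.

```python
import numpy as np  # hp_endpoint_weights as defined in the previous sketch

for lamb in (1_600, 400_000):
    w = hp_endpoint_weights(T=200, lamb=lamb)
    lags = np.where(np.abs(w[::-1]) > 0.01)[0]  # lags with |weight| above 1%
    print(f"lambda = {lamb:>7,}: weight exceeds 1% out to lag {lags.max()} quarters")
```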

These two features give the trend estimates some undesirable properties. For example, if the actual data in a particular period are driven by excess credit growth, estimates of the trend should not be influenced by those data. This is particularly problematic where there are clear regime changes.

We use financial crises, where credit conditions changed dramatically, as our laboratory to illustrate these two issues with the filter. More specifically, we use narrative evidence from Argentina, Malaysia, Indonesia and the United States, all of which experienced financial crises, and we show that “excessively” high past credit-to-GDP data influence the trend estimates in post-crisis periods. Even though credit conditions changed dramatically following these crises, we find that old data still influence the trend estimates even after 10 years, which in theory should not be the case given that crises are clear structural breaks in trends. Drehmann and Tsatsaronis (2014), using simulations rather than the actual filter weights, make a similar point and suggest that the effects of structural breaks take 20 years to fully disappear.

The simple point we are making in our analysis is not an econometric or a filtering one; it is an economic one. What we show is that the trend estimates rely heavily on information from the distant past. We argue that these findings call for caution when using these gaps mechanically in policy making.


[1] Our colleague Victor Pontines has already illustrated the sensitivity of the counter-cyclical capital buffer to alternative measures of credit on the one hand, and to alternative methods of obtaining the credit gap on the other, in a 2018 SEACEN Staff Policy Analysis publication.


Ole is the Director of the Macroeconomic and Monetary Policy Management Pillar at The SEACEN Centre.