What Can We Expect From a Good Margin Model?

By: David Murphy, Visiting Professor, Department of Law, London School of Economics, May 2022


Initial margin has always been important for cleared derivatives, and its significance has grown since 2013 when regulators mandated the exchange of initial and variation margin for many bilateral OTC derivatives too.

Initial margin requirements for portfolios of derivatives – and many other kinds of exposure – are often estimated using a risk-based margin model. These models typically rely on the assumption that portfolios can be liquidated over some fixed time period, known as the margin period of risk. The model is designed to estimate how much a portfolio could change in value over this period to some degree of confidence, often 99% or 99.5%. But how would we know whether its estimates are good?

The conventional approach is to look at the occasions when portfolio losses exceed margin estimates over an historical period. ‘Backtests’, as these tests are known, examine the frequency of these ‘exceedances’: they should occur neither too often nor too infrequently. A model designed to calculate the 99th percentile of value changes, for instance, should show an exceedance one day in 100 on average. In practice, we might see two or three exceedances in 100 days, or none at all; but if we see 10, then something is likely to be wrong.
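To make the counting concrete, here is a minimal sketch of an exceedance count, assuming daily portfolio value changes and the margin held against them are available as NumPy arrays; the names are illustrative, not taken from any particular implementation.

```python
import numpy as np

def count_exceedances(pnl, margin):
    """Count the days on which the portfolio loss exceeded the margin held.

    pnl    : daily portfolio value changes (losses are negative numbers)
    margin : initial margin held at the start of each day
    """
    losses = -pnl                        # express losses as positive numbers
    return int(np.sum(losses > margin))

# For a 99% model we expect about one exceedance per 100 days;
# seeing 10 in 100 days would suggest systematic under-margining.
```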

More information on model performance can be gleaned by considering the size of these exceedances, or the extent to which they cluster together. However, even these more sophisticated backtests cannot distinguish between a wide range of models, simply because exceedances are rare. Models which produce quite different margin estimates can often all pass backtesting.

Whole distribution tests

This means that it is helpful to have additional tests of margin model performance. Many margin models, including the most popular ones, provide estimates of the whole distribution of portfolio returns one or more days forward. The margin requirement is set as a percentile of this distribution, but the model calculates the whole thing. In a recent paper, I showed how to use these estimates of the distribution of portfolio returns to test initial margin models. This approach is often more powerful than backtesting: it can give significant insights into model performance.
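The tests in the paper are more carefully constructed than anything that fits here, but the core idea can be sketched with a standard tool, the probability integral transform: if each day’s forecast distribution function is applied to that day’s realised return, good forecasts produce values uniformly distributed between zero and one. A minimal sketch, assuming the model supplies one forecast CDF per day as a Python callable:

```python
import numpy as np
from scipy import stats

def whole_distribution_test(realised_returns, forecast_cdfs):
    """Test distribution forecasts via the probability integral transform.

    realised_returns : the return actually observed on each day
    forecast_cdfs    : one callable per day, the model's forecast CDF
                       for that day's return distribution

    If the forecasts match reality, the transformed values are uniform
    on [0, 1]; a Kolmogorov-Smirnov test checks this.
    """
    pits = np.array([cdf(r) for cdf, r in zip(forecast_cdfs, realised_returns)])
    return stats.kstest(pits, "uniform")
```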

In one sense, it is unfair to test a model designed to estimate the 99th percentile of the distribution of returns on a portfolio on how well it does at estimating the whole distribution. But one could equally well ask why we are comfortable that the 99th percentile is right if much of the rest of the distribution is way off. A whole distribution test can prompt useful questions.

One area where our test can be particularly useful is model calibration. To see the issue here, consider a popular tool used in margin models, the exponentially weighted moving average or EWMA, which is used to estimate the current level of volatility. It takes as inputs a time series of portfolio returns and a parameter called lambda. Lambda controls the weight given to recent returns relative to older ones in the volatility estimate: if it is close to one, the calculation has a long memory, so older returns have a bigger effect on the estimate, while if it is lower, more recent returns have a bigger impact. How should we pick the lambda parameter?
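For reference, the EWMA recursion is simple. The sketch below implements the textbook form, in which today’s variance estimate is a lambda-weighted blend of yesterday’s estimate and yesterday’s squared return; the seed and the illustrative lambda of 0.97 are assumptions, not values from the paper.

```python
import numpy as np

def ewma_volatility(returns, lam=0.97):
    """EWMA volatility estimate from a series of portfolio returns.

    Implements the standard recursion
        var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}**2,
    seeded here with the squared first return. The closer lam is to
    one, the longer the memory of the estimate.
    """
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return np.sqrt(var)
```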

Backtesting is often not much help – in many cases, models with quite different lambdas can pass backtesting. But whole distribution tests are more insightful. We can compare the distribution estimates produced by models with different lambdas to the actual observed distribution of returns and reject those lambdas that produce poor estimates. Because we have a return to work with every day – rather than just on the days when there is an exceedance – this test is more powerful than simple backtesting. It can often isolate a small range of acceptable lambdas which produce good estimates of the daily return distribution.
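In sketch form, the calibration then becomes a simple scan over candidate lambdas, rejecting any that fail the distribution test. Here `returns` is the observed daily return series, `whole_distribution_test` is the sketch above, `build_forecast_cdfs` is a hypothetical stand-in for whatever machinery produces the model’s daily forecast CDFs for a given lambda, and the 5% threshold is purely illustrative.

```python
# Hypothetical calibration scan: build_forecast_cdfs stands in for the
# model's forecast machinery and is not a real library function.
candidate_lambdas = [0.90, 0.94, 0.97, 0.99, 0.995]
acceptable = [
    lam
    for lam in candidate_lambdas
    if whole_distribution_test(returns, build_forecast_cdfs(returns, lam)).pvalue > 0.05
]
```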

Results: familiar wisdom and a surprise

Many of the results of applying the suggested whole distribution test confirm things that are well known to margin model designers. Parametric value at risk models, which use the normal distribution with a width given by some volatility estimator, perform badly no matter which estimator we use. Historical simulation value at risk, which uses actual past returns, does better, as does the current industry standard model, filtered historical simulation (FHS). This last class of model uses an EWMA volatility estimate to ‘rescale’ past returns, and the paper shows how to find an acceptable range of lambdas for use in it.
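A rough sketch of the FHS idea follows, under the simplifying assumptions that the EWMA variance is seeded with the first squared return and that the last estimate in the series stands in for ‘today’s’ volatility; real implementations differ in these details.

```python
import numpy as np

def fhs_margin(returns, lam=0.97, confidence=0.99):
    """Filtered historical simulation margin, in sketch form.

    Each past return is divided by the volatility prevailing at the
    time and multiplied by the current estimate, 'rescaling' the
    historical shocks to today's conditions; the margin is then a
    percentile of the rescaled returns.
    """
    # Running EWMA variance: var[i] is the estimate available when
    # returns[i] occurs, seeded (crudely) with the first squared return.
    var = np.empty_like(returns)
    var[0] = returns[0] ** 2
    for i in range(1, len(returns)):
        var[i] = lam * var[i - 1] + (1 - lam) * returns[i - 1] ** 2
    vol = np.sqrt(var)

    rescaled = returns * (vol[-1] / vol)    # devolatilise, then rescale
    return -np.percentile(rescaled, 100 * (1 - confidence))
```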

There are less obvious results, too. For some classes of model, we find that no parameterisation works; this is even true for FHS in some cases. The culprit can often be found in the far tails of the return distribution. The models – where by ‘model’ we mean both the margin model algorithm and the parameters, such as lambda, used to calibrate it – work well for most of the return distribution, usually out well past the margin threshold. But beyond the 99.7th or 99.8th percentile, they start to fail. That is, they do a good job of estimating the return distribution at the percentiles typically used for initial margin, but they do not capture unusually large negative returns very well. Margin models cannot be expected to work well in the far tail.

If we zoom in on a period of high stress, such as the early Covid period of March 2020, we find some further interesting effects. A range of initial margin models which are all acceptable under our test – and which therefore give similar margin estimates most of the time – give quite different margin estimates in this period. This is because they react quite differently to intense stress: some increase margin rather quickly, while others do so rather more slowly. Neither backtesting nor our whole distribution test can say which kind of response is ‘right’. This suggests that there may be freedom to use other criteria, such as lower procyclicality, to select a model.



Disclaimer:

The views, thoughts and opinions contained in this Focus article belong solely to the author and do not necessarily reflect the WFE’s policy position on the issue, or the WFE’s views or opinions.