
Fundamental review of the trading book news October 31, 2013 at 1:29 pm

The second consultative document on the fundamental review of the trading book is out: you can read it here. A few highlights:

  • “The Committee remains sceptical that existing internal models-based risk measurement methodologies used by banks can adequately capture the risks associated with securitised products. As a result, capital charges for securitisation positions in the trading book – including correlation trading activities – will be based on the revised standardised approach”. RIP CRM models.
  • “the Committee has decided that joint modelling of the discrete (default risk) and continuous (spread risk) components of credit risk is likely to involve particular practical challenges… As a result, the Committee has agreed that non-securitisation credit positions in the trading book will be subject to a separate Incremental Default Risk (IDR) charge”.
  • “the Committee has decided that it is not appropriate for CVA to be fully integrated into the market risk framework”.
  • “the Committee has confirmed its intention to pursue two key reforms outlined in the first consultative paper: stressed calibration… [and] move from Value-at-Risk (VaR) to Expected Shortfall (ES)”.
  • “The Committee’s approach to address the risks posed by varying market liquidity consists of two elements: First, incorporating ‘liquidity horizons’ in the market risk metric… [Second] capital add-ons against the risk of jumps in liquidity premia”.
  • “The Committee is taking a number of steps to strengthen the relationship between models-based and standardised approaches. First, it is establishing a closer link between capital charges resulting from the two approaches. Second, it will require mandatory calculation of the standardised approach by all banks. Third, it will require mandatory public disclosure of standardised capital charges by all banks on a desk-by-desk basis. Finally, the Committee is also considering the merits of introducing the standardised approach as a floor or surcharge to the models-based approach”.

It’s consultative, and you have until 31st January 2014 to get your comments in. Happy Christmas.

Changing my model July 11, 2012 at 7:01 am

A recent Bloomberg story about VAR model changes points out

Wall Street firms routinely give only broad outlines of how their mathematicians calculate VaR, according to data compiled by Bloomberg, and almost nothing about changes in statistical assumptions or the prices they choose to feed into their models… The skewed comparisons can leave investors guessing about whether the potential for loss is rising or falling, according to risk analysts

Um, yeah, and?

OK, let’s be a little more precise. There are three types of VAR model changes:

  1. Data series changes. Here the model is simply updated with new market data. Some firms used to do this infrequently (e.g. quarterly), but the new Basel standards require it to be done at least monthly. Depending on how different the market is in the new period, and on how long the data series used to calculate VAR is, this change can be material; usually, though, it isn’t.
  2. Risk factor changes. Here a new risk factor is added (or occasionally subtracted). The mapping of products onto risk factors is changed at the same time to accommodate the new risk dimension. This change might be immaterial, but often isn’t, as typically the change is made to better measure risk.
  3. True model changes, such as going from a variance/covariance VAR to a historical simulation one. These are almost always material (see the sketch below).
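
To get a feel for how material a type-3 change can be, here is a minimal sketch (synthetic P/L and illustrative numbers only, not any particular bank’s book) comparing the two model types on exactly the same data:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
# A few years of hypothetical daily P/L for one book; the Student-t's
# fat tails stand in for real market returns. Units: dollars.
pnl = rng.standard_t(df=4, size=2000) * 1e6

# The same portfolio, two model types (the "type 3" change above):
var_vcv = -norm.ppf(0.01) * pnl.std()   # variance/covariance: normal P/L assumed
var_hs = -np.quantile(pnl, 0.01)        # historical simulation: empirical 1% quantile

print(f"variance/covariance 99% VAR: ${var_vcv/1e6:.2f}M")
print(f"historical simulation 99% VAR: ${var_hs/1e6:.2f}M")
# With fat tails the historical figure typically comes out well above
# the normal approximation: the model change alone moves reported VAR.
```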

None of these changes have to be disclosed, and they never have been since the dawn of VAR in the 1990s. Moreover, even if they were disclosed, it would be very very difficult for an outside analyst to understand exactly what was going on. Modern VAR models are so complex, and the portfolios they are applied to have so many risk factors, that getting your head around a single bank’s VAR requires working with it for months if not years.

So what? Well, for me, two lessons.

First, VAR tells you little about relative risk. If firm A’s 99% 10 day VAR is $50M and firm B’s is $80M, you can conclude nothing about the relative riskiness of A and B. Indeed it is perfectly possible to find two different VAR models, both approved by regulators, one of which says A is riskier than B and the other says B is riskier than A.

Second, it is astonishing that investors are only just working this out, if indeed this really is the case. Or perhaps Bloomberg is trying to make a story out of something that has been well known in the risk community for twenty years…

So, with apologies to David Bowie

Ch-ch-ch-ch-Changes
(Turn and face the SEC)
Ch-ch-Changes
Don’t want to be a richer man
Ch-ch-ch-ch-Changes
(Turn and face the SEC)
Ch-ch-Changes
Just gonna have to have a different model
Time may change my P/L
But I can’t trace time

Changing models with Jamie May 15, 2012 at 7:12 am

I have finally managed to track down the transcript of Jamie’s embarrassing call. My favourite part relates to JPMorgan’s VAR:

We are also amending a disclosure in the first quarter press release about CIO’s VAR, Value-at-Risk. We’d shown average VAR at 67. It will now be 129. In the first quarter, we implemented a new VAR model, which we now deemed inadequate.

67 to 129. Not an immaterial change then. Ooops.

Update. Does this give JPM a Sarbanes-Oxley problem?

Another update. IFR reports

the Chief Investment Office, the unit responsible for the high-profile loss that JP Morgan disclosed last Thursday, had a separate VaR system.

It used a less stringent calculation that gave a lower risk assessment of its trades, according to people who previously worked at the bank.

The unit also reported directly to CEO Jamie Dimon, a factor which allowed it to maintain a separate risk monitoring set-up to other parts of the investment bank, these people said.

For me, the reporting line and oversight issues are even worse than the VAR model ones.

Backtesting with BaFin November 19, 2011 at 8:37 am

We all know that VAR performed terribly in the crisis, and as a result lost credibility as a risk measure. But how badly?

BaFin’s annual report gives us a clue. It reports both the number of German banks with permission to use VAR models and the total number of backtest exceptions (see page 138).

In 2008 there were 15 German banks with VAR permission; in 2009, 14. Banks calculate 99% VAR, so we would expect a trading loss bigger than VAR one day in a hundred, or roughly 2.5 times in a 250-day trading year. With 14 institutions, then, we would expect to see a total of about 14 x 2.5 = 35 exceptions a year. How many were actually reported to BaFin?

  • In 2008, 120.
  • In 2009, 14.

This is startlingly bad. It is also exactly what we would expect knowing the deficiencies of VAR. Models calibrated to the good times of 2005 and 2006 dramatically under-stated risk in 2008, leading to lots of exceptions. Similarly, those same models, calibrated to the very volatile markets of 2008, over-stated risk in 2009 when things were not (quite) as bad.
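
Just how bad is 120? A back-of-the-envelope binomial sketch, assuming roughly 250 backtest days per bank-year and (heroically) independent exceptions:

```python
from scipy.stats import binom

trials = 15 * 250        # 15 banks x ~250 backtest days in 2008
p_exc = 0.01             # a 99% VAR should be exceeded on 1% of days
print(f"expected exceptions: {trials * p_exc:.1f}")    # ~37.5

# Probability of 120 or more exceptions if the models were well
# calibrated. Exceptions across banks are in fact correlated, which
# makes the 2008 clustering even less defensible, not more.
print(f"P(>=120 | good models): {binom.sf(119, trials, p_exc):.1e}")
```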

Given this powerful evidence, why does Basel 2.5 still require banks to use VAR (plus stressed VAR plus all the other stuff)? max(VAR, stressed VAR) would be a more robust and less procyclical risk measure.

VAR – direction but not comparison October 21, 2010 at 2:42 pm

There has been some talk recently of Morgan Stanley’s VAR vs. the same measure for Goldman. Let’s start with the reasonable part, from Reuters:

Morgan Stanley’s firm-wide VaR for all assets averaged $142 million a day in the third quarter, up 2 percent from the second quarter and 4 percent from a year ago. Its commodities VaR saw one of the highest growth, rising 17 percent on the quarter and 30 percent from a year ago to average $32 million.

Goldman’s commodities VaR, in comparison, fell 13 percent during the quarter and rose 7 percent from a year ago.

From this we can conclude that, provided neither firm materially changed its methodology during the quarter, it is likely that Morgan Stanley is taking more commodity risk than a quarter ago, and Goldman less.

The next headline and its implication are wrong, though:

Morgan Stanley overtakes Goldman in commodity risk

The last time Morgan Stanley’s commodities VaR was above Goldman’s was in the fourth quarter of 2007, before the last commodities boom reached its peak.

You can’t compare one firm’s VAR with another’s: the calculation methodologies, calibrations, risk factors and risk factor mappings are too different. The only way to really know would be to put Goldman’s portfolio through Morgan Stanley’s model and vice versa. Only, um, whisper it quietly, but it is entirely possible that you might find

VAR(MS model, GS portfolio) > VAR(MS model, MS portfolio)

VAR(GS model, GS portfolio) < VAR(GS model, MS portfolio)

That is, it is possible that Morgan’s model might think that Goldman’s portfolio is riskier but Goldman’s model might disagree.
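
In case that seems far-fetched, a minimal sketch (synthetic portfolios, not the real GS or MS books): give one portfolio fat tails at unit volatility and the other thin tails at slightly higher volatility, and a variance/covariance model and a historical simulation model rank them in opposite orders.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 100_000
port_a = rng.standard_t(df=3, size=n) / np.sqrt(3)  # fat tails, unit vol
port_b = rng.normal(0, 1.05, size=n)                # thin tails, higher vol

var_vcv = lambda x: -norm.ppf(0.01) * x.std()   # model 1: variance/covariance
var_hs = lambda x: -np.quantile(x, 0.01)        # model 2: historical simulation

print(f"model 1: A={var_vcv(port_a):.2f}  B={var_vcv(port_b):.2f}")  # B 'riskier'
print(f"model 2: A={var_hs(port_a):.2f}  B={var_hs(port_b):.2f}")    # A 'riskier'
```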

Capital currency October 20, 2009 at 7:02 pm

The usual asset liability orthodoxy is that assets should be funded in the currency they are denominated in, so if you make a Hong Kong dollar loan, you fund it using a HKD liability.

There is no comparable orthodoxy for capital. This is usually held in the firm’s home currency only. It occurred to me today that that is odd for at least two reasons.

First, capital is funding. If you make a $100M loan and take an 8% capital requirement, that $8M is funding too. You only need $92M of deposits or debt to fund the rest.

Second, if that $8M happens to be held in yen, because the bank concerned is Japanese, it is 726M JPY at 90.75. But if dollar/yen moves to 100, that 726M JPY is only $7.26M and the capital is no longer adequate simply due to an FX movement. Which is not helpful.
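
The arithmetic, as a trivial sketch using the post’s numbers:

```python
capital_usd = 0.08 * 100_000_000       # 8% of a $100M loan = $8M
capital_jpy = capital_usd * 90.75      # held in yen: 726M JPY

usd_value_later = capital_jpy / 100.0  # after USD/JPY moves to 100
shortfall = capital_usd - usd_value_later
print(f"capital now worth ${usd_value_later/1e6:.2f}M "
      f"(short ${shortfall/1e6:.2f}M from FX alone)")
```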

Surely then, at least roughly, it would make sense to keep capital in the currency of the asset it was supporting. In some cases – notably VAR where diversification makes it hard to say what is supporting what – this is not easy. But in many cases it is. Is there a good reason firms do not do this?

Goldman humour July 14, 2009 at 2:56 pm

From John Kemp via Felix Salmon:

[Chart: Goldman VAR]

How much capital does a bank need? April 30, 2009 at 6:03 am

Probably the best answer is the least amount that gives comfort and confidence to debtholders, regulators, and other stakeholders. But that is a moving target.

Over the last twenty years people like me have spent a lot of time trying to construct and improve capital models. At its simplest, a capital model uses some measure of risk to deduce how much capital is required for some portfolio. The problem is that many of these measures of risk have proved highly fallible, and thus capital has been systematically understated. Moreover, thanks (1) to leverage and (2) to the fact that losses are a deduction from capital, even small capital mis-estimates can imperil a bank in a crisis like the current one.

These chickens (or in Citigroup and Bank of America’s case, flocks of giant mutant turkeys) are coming home to roost. Six of the largest nineteen US banks require more capital, according to the Fed: and you can be sure other banks around the world do too.

What you can be sure of in these discussions is that the numbers are essentially arbitrary. No one really knows how much capital a bank needs at any given point, not least because risk based capital models have lost their credibility. Capital is adequate if and only if it keeps the relevant stakeholders happy: and risk based estimates no longer keep people happy. So don’t expect the negotiations of the coming weeks necessarily to make sense. It will all be about the deal that can be done when everyone gets around the table.

Update. Matthew Yglesias, via Gary Becker, has a nice characterisation of what we can expect from capital, and indeed regulation in general:

The best you can hope from a regulatory regime is … [a] fairly crude rule will improve on the outcomes generated by the unfettered market… when we’re looking at a regulatory regime that seems to be working okay, and the regulated parties start saying we need tweaks x and y and z and oh there’s no danger there we should be very suspicious. We shouldn’t count on being [able] to fine-tune our results to perfection

Perspex* Steagall? March 13, 2009 at 7:06 am

Paul Volcker suggests that:

the US could perhaps do with a new version of Glass-Steagall, this time splitting hedge funds, private equity funds and proprietary trading off from Wall Street banks.

How would you enforce it, though? You can’t force them to take no risk, not least because banks cannot precisely maturity- and FX-match their funding, and so are net long liquidity which they need to invest. So you would have to have a de minimis risk limit. But how would it be expressed to prevent gaming? Remember that one of the reasons we got into this mess in the first place was that AAA subprime tranches looked very low risk in VAR models. Still, the idea merits discussion.

*No disrespect intended to Senator Carter Glass.

Backtesting quant strategies January 8, 2009 at 11:17 am

Felix Salmon has an excellent post today on quant strategies. As he points out, knowing whether you have something worth trading isn’t exactly easy at the moment.

When you backtest, do you backtest through the quant blow-up of 2007 and the stock-market meltdown of 2008? If so, do you really think that’s going to give you the kind of trading idea which will make money going forwards? And if not, then what do you ignore, and why do you ignore it, and what makes you think you won’t run into a third period of high volatility which will lie well outside any reasonable assumptions you might make?

Up until 2007, the problem with quant funds was that the models didn’t remotely conceive of the world as it transpired. Now, the problem with quant funds is that they can’t help but conceive of the world as it transpired.

This applies more broadly, of course. No one has any recent data on normal markets because there haven’t been any recently…

Right target, wrong ammo June 2, 2008 at 10:33 am

The FT reports:

International regulators and supervisors have started drawing up plans to make it far more expensive for investment banks to hold large volumes of complex financial instruments, such as mortgage-linked securities, in their trading books… though the Basel rules require banks to hold large capital reserves against the risk of credit default in their loan book, regulators only require small buffers for assets held in the trading book if these are labelled as low-risk, according to so-called Value at Risk models.

Up to a point, your honour. Certainly there is a well-documented problem with the imprudence of VAR models, especially (but not only) ones which do not capture all the relevant risk factors. But the credit risk rules are not a shining example of prudence either, especially for low PD portfolios.

One need only compare JPMorgan’s capital allocation for market risk — $9.5B at Y/E 2007 — with its VAR — $107M — to see the problem with trading book capital based on VAR alone. But the solution is not to dump on structured products alone: it is to revamp the entire market risk regime.

Excepting VAR April 28, 2008 at 7:46 am

Even S&P think VAR alone is inadequate as a market risk measure. From “Trading Losses At Financial Institutions Underscore Need For Greater Market Risk Capital”:

The securities markets changed dramatically in 2007, shaking the trading businesses of banks and showing up in their risk measurements. The main metric, the aptly named value at risk (VAR), was rising in conjunction with soaring market volatility. VAR estimates maximum loss for a certain time period–for instance, one-day–to a given confidence interval–such as 99%. However, many banks posted losses much higher than VAR and even greater than their regulatory requirements for the capital they need to hold against market risks.

This situation illustrates the shortcomings of VAR models. Most notably, they are designed to predict losses under normal trading conditions. In addition, they ignore or underestimate certain risks, notably the increasing amounts of idiosyncratic risk arising from new and complex financial instruments that are a feature of today’s trading desks.

[…] To better reflect the magnitude of trading portfolios’ underlying risks, we envisage making a series of upward adjustments to capital requirements under Basel II as part of the calculation of our proposed risk-adjusted capital ratio…

Furthermore, the increased number of backtest exceptions this year has not passed unnoticed by supervisors. S&P give a useful graphic showing some large firms had more than ten exceptions in 2007: a lot of this information is available in individual firms’ 10-Qs, so it is hardly secret. Rather than applying the 1996 Market Risk Amendment approach and putting these failing VAR models in the ‘yellow’ or ‘red’ zone, with concomitant small increases in regulatory capital, surely the time has come to revisit market risk capital completely and add in some measure of stress capital.

Amplified mortgage portfolio super seniors: a really bad idea April 21, 2008 at 6:42 pm

The UBS shareholder report on the firm’s subprime losses makes fascinating reading and I will try to return to it later in the week. Meanwhile, however, it is worth noting that a major cause of the UBS losses was AMPS. Let the report take up the story:

[AMPs] were Super Senior positions where the risk of loss was initially hedged through the purchase of protection on a proportion of the nominal position (typically between 2% and 4% though sometimes more). This level of hedging was based on statistical analyses of historical price movements that indicated that such protection was sufficient to protect UBS from any losses on the position.

Let’s try and tease this apart. The bank is long the supersenior tranche in a CMO. They ‘hedged’ this position by buying credit protection on the underlying mortgage portfolio in an amount calculated to minimise short term P/L volatility. I think.

Isn’t this pure gaming of the VAR model? This ‘hedge’ dramatically reduces the VAR. But as losses build up in the junior and rise through the mezz, the bank will need to short a larger and larger percentage of the underlying mortgages to remain hedged. In other words this position is massively short credit convexity even if it is credit delta neutral. And even that assumes that you can short more of the underlying pool into a falling market, an assumption that is highly questionable.
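
A stylised toy makes the convexity point. Every parameter below is an assumption for illustration (a 30% attachment point, protection absorbing the first 2% of losses on the position), not UBS’s actual book, but the shape is the story:

```python
POOL = 1_000_000_000                  # $1B mortgage pool
ATTACH = 0.30                         # super senior attaches at 30% pool losses
SS_NOTIONAL = (1 - ATTACH) * POOL
AMPS_COVER = 0.02 * SS_NOTIONAL       # AMPS absorbs the first 2% of the position

def super_senior_loss(pool_loss_rate: float) -> float:
    """Losses hit the junior and mezz first, then the super senior."""
    excess = max(0.0, pool_loss_rate * POOL - ATTACH * POOL)
    return min(excess, SS_NOTIONAL)

for lr in (0.05, 0.20, 0.31, 0.35, 0.50):
    gross = super_senior_loss(lr)
    net = max(0.0, gross - AMPS_COVER)
    print(f"pool loss {lr:4.0%}: tranche ${gross/1e6:6.1f}M, "
          f"net of AMPS ${net/1e6:6.1f}M")
# Up to ~31% pool losses the position looks riskless -- VAR sees nothing.
# Beyond that the 2% buffer is trivial: delta neutral, massively short convexity.
```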

Anyway, even if the AMPS position was not designed to game the VAR model, it certainly achieved that effect:

Once hedged, either through NegBasis or AMPS trades, the Super Senior positions were VaR and Stress Testing neutral (i.e., because they were treated as fully hedged, the Super Senior positions were netted to zero and therefore did not utilize VaR and Stress limits). The CDO desk considered a Super Senior hedged with 2% or more of AMPS protection to be fully hedged. In several MRC reports, the long and short positions were netted, and the inventory of Super Seniors was not shown, or was unclear.

(See here for a discussion of negative basis trading.) For something like this there is a real danger that the system’s view is seen as the only reality. If the VAR model says there is no risk, the firm might actually think that’s true.

Next we come to model risk:

The AMPS model was certified by IB [UBS investment bank] Quantitative Risk Control…but with the benefit of hindsight appears not to have been subject to sufficiently robust stress testing. […] The cost of hedging through a Negative Basis trade was approximately 11 bp, whereas the cost of hedging through an AMPS trade was approximately 5 – 6 bp.

So, a positive carry asset hedged very cheaply but leaving a large short gamma position which was not captured by the firm’s risk model. They really were asking to be creamed by a big market move. And then one came along.

Why the long ABS? April 20, 2008 at 7:16 am

[Photo: lone shack]

Gillian Tett, in the FT, comments on the large supersenior ABS holdings at Merrill and UBS, backed by mortgages on properties like the fine abode above:

Most notably, as these banks have pumped out CDOs, they have been selling the other tranches of debt to outside investors – while retaining the super-senior piece on their books. Sometimes they did this simply to keep the CDO machine running

Absolutely. And also as a funding arbitrage: for a bank that funds at Libor flat and views supersenior as risk free, supersenior paying Libor plus ten is a good investment. Tett continues:

[Since] super-senior debt carried the AAA tag, banks were only required to post a wafer-thin sliver of capital against these assets

Again true, but I doubt that the advantageous reg. cap. position of these assets was that important. Any low volatility bond would do in a VAR setting, or any internally highly rated one under Basel 2 in the banking book. And there are plenty of AAAs that yield more than Libor plus ten. The real issue is the risk assessment: some banks managed to persuade themselves this paper was risk free. And that brings us nicely to an article in the WSJ on how exactly the firm got to that assessment. Enjoy.

How much do traders care about capital models? April 6, 2008 at 9:08 pm

I have written before on the perverse incentive in risk sensitive capital models such as VAR, and on the suboptimal design of the Basel 2 capital accord. Reading a guest post by Avinash Persaud on Willem Buiter’s blog, however, I wondered how much this matters. Let me explain. Persaud rightly points out that everyone uses roughly the same sort of market risk capital model and as a result everyone has roughly the same view of the capital against a given portfolio. Moreover this number changes as the models are recalibrated to include more recent data and thus if vols rise, capital does too. But I wonder if Persaud goes too far in what he says next:

Market participants don’t stare helplessly at these results. They move into the favoured markets and out of the unfavoured. Enormous cross-border capital flows are unleashed. But under the weight of the herd, favoured instruments cannot remain undervalued, uncorrelated and low risk. They are transformed into the precise opposite.

In a crisis certainly everyone covers at once, or at least sufficiently many people do in a sufficiently short period of time to cause illiquidity and rapid price falls. Most of the time, however, most traders do not use VAR to optimise their portfolios. They see VAR as at best an inconvenient limit. So they don’t move into markets their firm’s capital model favours: they move into markets their gut instinct, their broker or their boss favours. While capital models (and risk limits phrased in the same terms) do create an incentive structure, they are neither the only nor even the most important determinant of an institution’s risk profile. It would undoubtedly be a good idea to fix these unhelpful incentives. But we should not over-egg the cake by suggesting that capital models actually change banks’ behaviour much except at the margin.

‘Sucks’ is an invariant January 29, 2008 at 8:09 am

If a model sucks today, the strong likelihood is that it will suck tomorrow. VAR sucks. Let’s reprise the case.

VAR is procyclical. As markets rise, volatilities and correlations tend to fall so the VAR for a position goes down, encouraging over-leverage. When they crash, VAR goes up, encouraging firms to cut at the worst moment and exacerbating illiquidity.
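
A minimal sketch of the mechanism, with synthetic returns (two calm years, then a crisis year; the numbers are made up but the shape is generic):

```python
import numpy as np

rng = np.random.default_rng(7)
returns = np.concatenate([rng.normal(0, 0.005, 500),   # calm: 0.5% daily vol
                          rng.normal(0, 0.03, 250)])   # crisis: 3% daily vol

WINDOW = 250  # one year of history, a common regulatory choice
var99 = [-np.quantile(returns[t - WINDOW:t], 0.01)
         for t in range(WINDOW, len(returns))]

# VAR is lowest just before the crash -- when adding leverage is most
# tempting -- and peaks once the crash is under way, when cutting hurts most.
print(f"VAR on the eve of the crash: {var99[249]:.2%}")
print(f"VAR a year into the crash:  {var99[-1]:.2%}")
```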

VAR models are based on history. The better we calibrate to the recent past, the less accurate VAR is likely to be: this is true for both simple variance/covariance VAR models and for more sophisticated historical simulation ones.

(To see this, consider a delightful note to UBS’s 2004 accounts.)

Over the past two years, growth in asset-backed securities has outpaced other sectors in the fixed income markets. At the same time, our Investment Bank’s market share in this sector has grown, leading to an increase in exposure. To date these exposures have been represented as corporate AAA securities in VAR, leading to a conservative representation of credit spread risk.

To better reflect the risk in VaR, we have increased the granularity of our risk representation of such securitized products. In July 2004, the Swiss Federal Banking Commission (SFBC) gave their approval for this change and we have implemented the revised model during third quarter.

The enhanced model added a number of historical data series, which more closely reflect the individual behavior of products such as US Agency debentures, RMBS & CMBS, and other asset backed securities such as credit card & automobile loan receivables.

In other words, UBS rightly improved their model. But because the improved model was better calibrated to recent conditions, it reduced the risk shown on ABS. ABS ended up costing UBS billions.

VAR gives no insight into the tails. Morgan Stanley’s $480M loss on a (at most) $110M VAR is evidence enough of that.

Many firms have highly non-linear P/L distributions. This means they respond in a highly non-linear fashion to extreme returns, making VAR at a reasonable (i.e. statistically significant) confidence interval even less useful. The evidence for this is discussed here.

We got very lucky with the timing. VAR for reporting regulatory capital was approved in the 1996 Market Risk Amendment to Basel 1. 1996 came at the end of a relatively quiet period in most markets, with no major equity market or emerging market crashes since the 80s. If the matter had come up for consideration in 1997 (South East Asian Crisis), 1998 (Russia, LTCM), 2000 (high tech crash) or 2001 (Argentina, 9/11), the data supporting the veracity of VAR as a risk measure would have looked a lot less good.

See here for a further discussion on Bloomberg, or here for a longer one from Naked Capitalism.

VAR is a reasonable ordinary-conditions risk measure. But it gives somewhere between very little and no insight into stress losses. Yet stress losses occur several times a decade. Perhaps we should move towards a simpler capital regime based on large moves and no benefit for diversification. Like, err, the one we had before 1996.

Incentive structures in capital estimation October 14, 2007 at 8:33 am

A capital model creates an incentive structure: if a firm’s estimate of the capital required to support its business rises, then that implies it is taking more risk. At some point increasing risk becomes unacceptable given the firm’s desired soundness standard, and so positions are cut.

Recently there has been some discussion by Gillian Tett in the FT (quoted by Naked Capitalism) of this effect with regard to VAR models. The basic problem in VAR is that risk estimates can increase either because the portfolio has changed or because market volatilities and correlations have increased. Thus with a regularly updated VAR model the same portfolio produces a higher capital charge in a crisis, and hence banks are incentivised to cut at the worst moment. Similarly risk estimates are lower in calm markets, encouraging banks to over-leverage.

The effect can be significant. As Ms. Tett points out

[The Bank of England] estimated that a typical bank’s VAR might theoretically double, with the same assets, if volatility increased.

A similar problem occurs in Basel 2 IRB models – in an economic downturn, banks’ estimates of PD and LGD rise, increasing capital requirements, and so discouraging lending. This may increase the intensity and duration of the downturn.

The phenomenon is known as pro-cyclicality, and it is clearly undesirable. One problem here is that regulators have confused two different kinds of risk sensitivity. Clearly, at any point in time, having a larger capital estimate for a riskier portfolio than for a less risky one is a good thing: let us call this portfolio risk sensitivity. (Basel 2 doesn’t completely satisfy this either, but we will ignore that for the moment.) Then there is temporal risk sensitivity: here the risk estimate of the same portfolio changes over time as the market factors used as inputs to the capital model change. It is much less clear that complete temporal risk sensitivity is a good thing. Using long-term average inputs to VAR or IRB models might produce better incentives than short-term current market estimates. Such models would have the helpful (in a crisis) property of failing to respond quickly to changes in market conditions.
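
A sketch of the difference, with synthetic calm-then-crisis returns (the 60-day and long-run lookback windows are illustrative choices, not regulatory prescriptions):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)
returns = np.concatenate([rng.normal(0, 0.005, 500),   # calm regime
                          rng.normal(0, 0.03, 250)])   # crisis regime

z99 = -norm.ppf(0.01)  # ~2.33

def parametric_var(t: int, lookback: int) -> float:
    # Parametric VAR with volatility estimated over the given window.
    return z99 * returns[max(0, t - lookback):t].std()

t = 600  # 100 days into the crisis
print(f"short (60d) vol input:  VAR = {parametric_var(t, 60):.2%}")
print(f"long-run average input: VAR = {parametric_var(t, 600):.2%}")
# The short-window estimate has already exploded, demanding cuts
# mid-crisis; the long-run input responds far more slowly -- the
# temporal insensitivity argued for above.
```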

It might be argued that this means that banks are under-capitalised during tough markets. That would be a reasonable argument if VAR produced capital estimates which reflected possible losses in these markets – but it doesn’t (and it was never designed to). One need only examine Morgan Stanley’s latest 10-Q to see the phenomenon: their VAR was very roughly $100M yet they suffered a one-day loss of $390M. This does not mean that their VAR is broken: VAR is not intended to give an account of how big losses might potentially be. But it does illustrate that modern trading activities can generate losses far in excess of VAR-based capital estimates, and hence that other risk measures such as stress tests are important too. This relegates VAR to its proper place, as one risk measure amongst a number. In this setting market risk capital would not be based on VAR alone, and there would be no need for overly temporally risk-sensitive capital estimates.