Category / Model risk

Copula counterfactual April 28, 2009 at 6:30 am

How different would the world be if David Li had written about a variety of different copulas rather than just the Gaussian one? (Do read the excellent Sam Jones piece that the link points to.)

More on models April 22, 2009 at 8:16 am

From Daniel Kahneman, via portfolio.com:

A group of Swiss soldiers set out on a long navigation exercise in the Alps. The weather was severe and they got lost. After several days, with their desperation mounting, one of the men suddenly realized he had a map of the region.

They followed the map and managed to reach a town. When they returned to base and their commanding officer asked how they had made their way back, they replied, “We suddenly found a map.” The officer looked at the map and said, “You found a map, all right, but it’s not of the Alps, it’s of the Pyrenees.”

Correlation is not causality April 3, 2009 at 8:51 am

From the social science statistics blog via Naked Capitalism, an amusing illustration of this truth:

Lemons vs. fatalities

Sociologists do models, kinda February 13, 2009 at 6:59 am

From Reflexive Modeling: The Social Calculus of the Arbitrageur by Daniel Beunza and David Stark:

Modeling entails fundamental assumptions about the probability distribution faced by the actor, but this knowledge is absent when the future cannot be safely extrapolated from the past…

By privileging certain scenarios over others, by selecting a few variables to the detriment of others, and in short, by framing the situation in a given way, models and artifacts shape the final outcome of decision-making. This … is the fundamental way in which the economics discipline shapes the economy, for it is economists who create the models in the first place…

…models can lead to a different form of entanglement. In effect, models can lock their users into a certain perspective on the world, even past the point in which such perspective applies to the case at hand. In other words, models disentangle their users from their personal relationship with the actor at the other side of the transaction, but only at the cognitive cost of entangling them in a certain interpretation.

Despite the focus on relatively uninteresting models (merger arb), this is an interesting paper for anyone interested in how traders really use models.

Epicurean Dice December 12, 2008 at 2:41 pm

The Epicurean Dealmaker has a post about risk and uncertainty. He makes some good points, and I want to expand on one of them: the respect we should have for the random nature of the markets.

Think about it like this. Mostly in finance we assume that we have the equivalent of a standard die. That is, while we assume we don’t know what number will come up next, we think that we know the distribution of numbers perfectly. In fact the real situation is much more akin to throwing a die where we have imperfect knowledge of what numbers are on the faces. They might be 1 to 6; but they might also be 1 to 5 with the 1 repeated; or 2 to 7; or something else entirely. Worse, the numbers are changed by the malevolent hand of chance on a regular basis. Not so often that we know nothing about the distribution, but often enough that we cannot be sure that the current market will be like the past.

Thus our risk estimates are potentially wrong for at least two reasons. We might have been wrong about the past distribution. And even if we got that right, it might be different in the future. In other words, you can’t manage risk effectively by assuming you know the distribution – to be effective, you really must assume that you don’t. Thus you don’t just want your risk to be low enough based on one model: you want it to be low enough based on all (or at least all likely) models.
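The last point can be made concrete: compute the risk number under several candidate return distributions and limit against the most conservative one. A minimal sketch, with purely illustrative distributions and parameters:

```python
import numpy as np
from scipy import stats

def var_99(dist):
    """One-day 99% VaR, expressed as a positive loss, for a
    scipy.stats return distribution."""
    return -dist.ppf(0.01)

sigma = 0.02  # assumed daily return volatility
models = {
    # thin-tailed benchmark
    "normal": stats.norm(scale=sigma),
    # fat-tailed alternative: Student-t(4) rescaled to the same
    # standard deviation (the variance of t(df) is df / (df - 2))
    "student_t4": stats.t(df=4, scale=sigma / np.sqrt(2.0)),
}

vars_ = {name: var_99(d) for name, d in models.items()}
worst_case = max(vars_.values())  # the number you actually limit against
```

The normal model alone would understate the 99% loss relative to the fat-tailed alternative; sizing risk to the worst plausible model, rather than the most convenient one, is exactly the discipline the post argues for.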

Warren’s puts November 25, 2008 at 10:53 pm

There is a very nice post on Financial Crookery about Warren Buffett’s written puts.

Let us assume that BRK sold $40bn notional 20 year puts (over 4 indices) in 2006-2007 at an average equivalent S&P 500 level of 1400. At the prevailing swap rate and dividend yields, and implied volatility of around 24%, this would have realised premia of approximately $4.5bn, close enough to the premia actually received not to worry too much about the exact details of the transactions.

The undiscounted future value of this liability, ie the fair value expectation of payment in 2027, is presently around $19bn. (At the money long dated volatility has expanded to 38%; this option now is well in the money and the skewed volatility for 1400 strike is more like 33%). The present value of this liability, before the impact of credit spreads, is around $10bn using the current swap curve.

So far, so simple. But this valuation does not take account of the credit spread of the writer of the put…

[The writer then goes on to estimate the credit effect and to speculate on whether BRK uses such credit-effected prices for its own mark to market. My reading of FAS 157/159 is not only that it can but that it must.]

The only point I might take issue with is that the article uses Black Scholes with vols that seem rather low (33%) to value Warren’s 18 year puts. These are far out of the money forward, and I am always a bit nervous about using Black Scholes for long-dated OTM puts – my guess would be that different process-theoretic assumptions would increase the value of the position (i.e. increase Warren’s loss). Kudos to Goldman though for buying these options: all that downside vol in size must make hedging their index books fun at the moment.
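For intuition on the vol sensitivity, here is a plain Black-Scholes put valuation. The inputs (S&P at 900, a 1400 strike, 18 years to run, a 3% rate and 2% dividend yield) are made up for illustration; they are not the actual contract terms:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_put(spot, strike, vol, years, r=0.03, q=0.02):
    """European put under Black-Scholes with continuously compounded
    rate r and dividend yield q."""
    fwd = spot * exp((r - q) * years)
    d1 = (log(fwd / strike) + 0.5 * vol * vol * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    disc = exp(-r * years)
    return disc * (strike * norm_cdf(-d2) - fwd * norm_cdf(-d1))

# Hypothetical inputs, chosen only to show the vol sensitivity
at_33_vol = black_scholes_put(900.0, 1400.0, 0.33, 18.0)
at_38_vol = black_scholes_put(900.0, 1400.0, 0.38, 18.0)
```

Vega is large at these maturities, so the choice between 33% and 38% moves the mark a long way; a model with jumps or richer downside dynamics would plausibly move it further still.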

No arbitrage requires arbitrageurs November 20, 2008 at 6:14 am

No arbitrage conditions are not natural laws. You can only rely on them if there are enough arbitrageurs around to keep the markets in line. At the moment, that isn’t true in many settings. John Dizard points out an example from the Tips market:

seven-year Tips bonds are asset swapping at 130 basis points over Libor

As Dizard says, this is partly because the Tips are illiquid and hard to finance (and thus to leverage), and partly because there is not enough risk capital around:

The dealers can’t afford to make efficient markets, given their decapitalisation, downsizing, and outright disappearance. That means anomalies sit there for weeks and months, where they would have disappeared in minutes or seconds. The arbs, well, they thought they had risk-free books with perfectly offsetting positions. These turned out to be long-term, illiquid investments that first bled out negative carry, and then were sold off by merciless prime brokers.

AIG and default correlation mis-estimation November 15, 2008 at 11:50 am

Felix Salmon has a nice piece on AIG FP’s strategy and why it went so badly wrong.

When AIG wrote protection on CDOs and the like, it got insurance premiums in return, and considered those premiums to essentially be free money, since (according to AIG’s own models, and those of the ratings agencies) the chances of those CDOs defaulting were essentially zero.

…AIG’s biggest mistake was in failing to realize that this business couldn’t scale in the way that most insurance does scale. Most insurance does scale: if you insure a house against fire, for instance, it’s easy to lose much more money than was paid in insurance premiums. But if you insure houses across the country against fire, you’d need a nationwide conflagration in order to lose lots of money.

… The reason AIG’s models said the CDOs couldn’t suffer any losses was that house prices don’t fall in all areas of the country simultaneously. Since AIG was only insuring the last-loss CDO tranches, investors with lower-rated tranches took the risk that prices in Florida, or Arizona, or California might fall. AIG would only lose money if prices fell in all those states at once — which is, of course, exactly what happened.

In other words, AIG’s models assumed default correlation would be low, and that there was a good measure of diversification benefit between the different CDOs it had written protection on. In reality once house prices turned down there was very little diversification, default correlations leaped up, and the mark to market on many of AIG’s contracts turned against them, necessitating the collateral postings that brought the insurer into the welcoming arms of the FED.
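The correlation effect is easy to reproduce in a one-factor Gaussian copula simulation. The pool size, per-name default probability and attachment point below are all invented for illustration:

```python
import numpy as np
from scipy.stats import norm

def prob_losses_exceed(rho, attach=0.20, n_names=100, pd_name=0.05,
                       n_sims=20_000, seed=42):
    """Monte Carlo probability that pool losses exceed `attach` in a
    one-factor Gaussian copula: name i defaults when
    sqrt(rho)*M + sqrt(1-rho)*e_i < Phi^{-1}(pd_name)."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(pd_name)
    market = rng.standard_normal((n_sims, 1))       # systematic factor
    idio = rng.standard_normal((n_sims, n_names))   # idiosyncratic factors
    defaulted = np.sqrt(rho) * market + np.sqrt(1 - rho) * idio < threshold
    loss_fraction = defaulted.mean(axis=1)
    return (loss_fraction > attach).mean()

low = prob_losses_exceed(rho=0.05)   # "house prices are local" world
high = prob_losses_exceed(rho=0.70)  # nationwide downturn world
```

With low correlation the senior attachment point is almost never breached; with high correlation the same pool, with the same individual default probabilities, breaches it with material probability. Only the correlation assumption has changed, and that is the assumption AIG got wrong.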

The strange and the completely predictable September 5, 2008 at 3:29 pm

Two facts. It is with no shock whatsoever that I report that Moody’s has made yet another stuff-up in CPDO modelling.

What is bizarre, though, is that someone at the FT seems to know what a copula is. See here for both.

What is a derivatives pricing model anyway? September 4, 2008 at 12:29 pm

I had a conversation about this last night and thought it was worth writing some of it down and extending it a little. So…

Let’s begin with the market. For our purposes there are some known current market variables which we assume are correct. This could be a stock price, interest rates, a dividend yield — and perhaps one or more implied volatilities.

Secondly we have a model. The model is often, but not always, standard, i.e. shared between most market participants. Let’s start with standard models. Here the model is first calibrated to the known market variables.

At this point we are ready to use the model. There is a safe form of use and a less safe one. In the safe one we use the model as an interpolator. For instance we know the coupons of the current 2, 3, 5, 7 and 10 year par swaps (plus the interest rate futures prices and deposits) and we want to find the fair value coupon for a 4.3 year swap. Or we know the prices of 1000, 1050 and 1100 strike index options and we want to price a 1040 strike OTC of the same maturity.

The less safe use is when we use the model as an extrapolator. We want a 12 year swap rate, for instance, or the price of a 1200 strike option. That’s not too bad provided we don’t go too far beyond the available market data, but it is definitely a leap.

(Both of these, by the way, count as FAS 157 level 2.)
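A toy version of the interpolation/extrapolation distinction, using invented par rates and the crudest possible interpolator (a real curve build would bootstrap discount factors instead):

```python
import numpy as np

# Invented par swap rates: maturity in years -> annual rate
maturities = np.array([2.0, 3.0, 5.0, 7.0, 10.0])
par_rates = np.array([0.031, 0.033, 0.036, 0.038, 0.040])

# Safer use: interpolating a 4.3y rate inside the quoted range
rate_4_3y = float(np.interp(4.3, maturities, par_rates))

# Less safe use: a 12y rate.  np.interp simply flattens beyond the
# last quote, which is a modelling choice, not market information.
rate_12y = float(np.interp(12.0, maturities, par_rates))
```

Inside the quoted range the answer is pinned down by market data on both sides; beyond it, the number you get is determined by the extrapolation rule you happened to pick.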

Note that there are two ways that we realise P/L in derivatives. Either we trade them or we hedge them. If we are in the flow business then trading is important. We need to use the same model as everyone else simply because we are in the oranges business and we need to know what everyone else thinks an orange is worth. We take a spread just like traders of other assets, buying for a dollar and selling for a dollar ten, or whatever. The book might well be hedged while we are waiting to trade, but basically we are in the moving business. Swaps books, index options, short term single stock, FX, interest rate and commodity options, and much plain vanilla options trading fall into this camp.

In the hedging business, in contrast, we trade things that we do not expect to have flow in. Most exotic option businesses are an example here, as are many long dated OTC options. There is no active market, so instead we have to hedge the product to maturity. Thus the model hedge ratios are just as important as the model prices. Valuation should reflect the P/L we can capture by hedging using the model greeks over the life of the trade. Standard models are therefore more questionable in the hedging business than in the moving business, since it is not just their prices (which are correct by construction) but also their greeks that matter.

Things start to get really hairy when we move away from standard models. Now we are almost certainly dealing with products where there is no active market (some kinds of FX exotics are a counterexample) and we do not even know that the model prices are correct. There is genuine disagreement across the market as to what some of these things are worth. Different models also produce radically different hedge ratios. How can we judge the correctness of such a model? The answer is evident from the previous paragraph: it is correct if the valuation predicted can genuinely be captured by hedging using the model hedge ratios. [Note that this does not necessarily give a unique ‘correct’ model.]

In summary then: for flow businesses we need interpolators between known prices and, to a lesser extent, extrapolators. For storage businesses we need models which produce good hedge ratios.

What does delta hedging a tranche mean? July 1, 2008 at 9:19 am

Some old research from, of all people, Bear Stearns, makes fascinating reading. It is about delta hedging CDX and iTraxx tranches, just about the simplest possible hedging problem in structured credit (in that index itself is the hedge, and both that and the tranches are liquid).

Suppose we have sold a tranche of the CDX. What is the delta with respect to the index? The standard definition would say something like

delta = (price of tranche at index spread plus 1bp – price of tranche at index spread) / 1bp

But there is a hidden correlation assumption: we calculate this delta at constant base correlation. Thus delta hedging will only be P/L minimising if

  • spread movements are small;

  • rehedging is possible after a small spread movement; and
  • base correlation remains constant.

The first two assumptions have not held recently, with even the hitherto liquid CDX and iTraxx displaying jumps and illiquidity. But interestingly, even back in 2004 the last one was known not to hold either. Here is the tracking error of delta hedging each of the CDX tranches from Bear’s research:

[Chart: tracking error of delta hedging the CDX tranches]

And here are the realised deltas (i.e. I think the best deltas ex post) vs. the calculated ones (ex ante from the model):

[Chart: realised vs. calculated tranche deltas]

And remember, that is the easiest hedge in structured credit. If the simplest position to hedge when the market was not particularly troubled gives you 3% tracking errors, what is it like trying to delta hedge a bespoke hybrid CDO at the moment?
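The delta definition above is just a bump-and-reprice at fixed base correlation, which a sketch makes explicit. The pricing function here is a deliberately crude, hypothetical stand-in for a real base correlation model; the point is only that correlation is held constant in the bump:

```python
def delta_const_corr(price, spread, base_corr, bump=1e-4):
    """Finite-difference tranche delta: bump the index spread by 1bp
    while holding base correlation fixed, as in the standard definition."""
    return (price(spread + bump, base_corr) - price(spread, base_corr)) / bump

def toy_tranche_price(spread, base_corr):
    # Hypothetical stand-in for a real tranche pricer: mark-to-market
    # of a sold tranche falls, convexly, as spreads widen.
    return -1e6 * spread * (1.0 + 50.0 * (1.0 - base_corr) * spread)

d_low = delta_const_corr(toy_tranche_price, 0.005, base_corr=0.3)
d_high = delta_const_corr(toy_tranche_price, 0.005, base_corr=0.6)
```

Since the delta itself changes with the correlation input, a hedge computed at constant correlation is only P/L minimising while correlation actually stays put, which is precisely the assumption the Bear Stearns data shows failing.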

Excepting VAR April 28, 2008 at 7:46 am

Even S&P think VAR alone is inadequate as a market risk measure. From “Trading Losses At Financial Institutions Underscore Need For Greater Market Risk Capital”:

The securities markets changed dramatically in 2007, shaking the trading businesses of banks and showing up in their risk measurements. The main metric, the aptly named value at risk (VAR), was rising in conjunction with soaring market volatility. VAR estimates maximum loss for a certain time period–for instance, one-day–to a given confidence interval–such as 99%. However, many banks posted losses much higher than VAR and even greater than their regulatory requirements for the capital they need to hold against market risks.

This situation illustrates the shortcomings of VAR models. Most notably, they are designed to predict losses under normal trading conditions. In addition, they ignore or underestimate certain risks, notably the increasing amounts of idiosyncratic risk arising from new and complex financial instruments that are a feature of today’s trading desks.

[…] To better reflect the magnitude of trading portfolios’ underlying risks, we envisage making a series of upward adjustments to capital requirements under Basel II as part of the calculation of our proposed risk-adjusted capital ratio…

Furthermore, the increased number of backtest exceptions this year has not passed unnoticed by supervisors. S&P give a useful graphic showing some large firms had more than ten exceptions in 2007: a lot of this information is available in individual firms’ 10-Qs, so it is hardly secret. Rather than applying the 1996 Market Risk Amendment approach and putting these failing VAR models in the ‘yellow’ or ‘red’ zone with concomitant small increases in regulatory capital, surely the time has come to revisit market risk capital completely and add in some measure of stress capital.

Amplified mortgage portfolio super seniors: a really bad idea April 21, 2008 at 6:42 pm

The UBS shareholder report on the firm’s subprime losses makes fascinating reading and I will try to return to it later in the week. Meanwhile, however, it is worth noting that a major cause of the UBS losses was AMPS. Let the report take up the story:

[AMPs] were Super Senior positions where the risk of loss was initially hedged through the purchase of protection on a proportion of the nominal position (typically between 2% and 4% though sometimes more). This level of hedging was based on statistical analyses of historical price movements that indicated that such protection was sufficient to protect UBS from any losses on the position.

Let’s try and tease this apart. The bank is long the supersenior tranche in a CMO. They ‘hedged’ this position by buying credit protection on the underlying mortgage portfolio in an amount calculated to minimise short term P/L volatility. I think.

Isn’t this pure gaming of the VAR model? The ‘hedge’ dramatically reduces the VAR. But as losses build up in the junior tranches and rise through the mezz, the bank will need to short a larger and larger percentage of the underlying mortgages to remain hedged. In other words this position is massively short credit convexity even if it is credit delta neutral. And even that assumes you can short more of the underlying pool into a falling market, an assumption that is highly questionable.
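The convexity point survives even the crudest arithmetic. Here is a stylised terminal-loss view of the position; the attachment point and hedge ratio are invented, and mark-to-market dynamics and rehedging are ignored:

```python
def amps_pnl(pool_loss, attach=0.30, hedge_frac=0.02):
    """Stylised AMPS book per unit of pool notional: long the losses
    above `attach` (the super senior), hedged with protection bought
    on `hedge_frac` of the whole pool."""
    tranche_loss = max(pool_loss - attach, 0.0)   # super senior writedown
    hedge_payoff = hedge_frac * pool_loss         # pro rata pool protection
    return hedge_payoff - tranche_loss

modest = amps_pnl(0.05)   # losses stay in the junior tranches
severe = amps_pnl(0.50)   # losses burn well into the super senior
```

For modest pool losses the 2% hedge pays off and the super senior is untouched, so the book looks hedged; for severe losses the tranche bleeds roughly one-for-one while the hedge still pays only 2% of the move. That asymmetry is the short convexity the VAR model never saw.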

Anyway, even if the AMPs position was not designed to game the VAR model, it certainly achieved that effect:

Once hedged, either through NegBasis or AMPS trades, the Super Senior positions were VaR and Stress Testing neutral (i.e., because they were treated as fully hedged, the Super Senior positions were netted to zero and therefore did not utilize VaR and Stress limits). The CDO desk considered a Super Senior hedged with 2% or more of AMPS protection to be fully hedged. In several MRC reports, the long and short positions were netted, and the inventory of Super Seniors was not shown, or was unclear.

(See here for a discussion of negative basis trading.) For something like this there is a real danger that the system’s view is seen as the only reality. If the VAR model says there is no risk, the firm might actually think that’s true.

Next we come to model risk:

The AMPS model was certified by IB [UBS investment bank] Quantitative Risk Control…but with the benefit of hindsight appears not to have been subject to sufficiently robust stress testing. […] The cost of hedging through a Negative Basis trade was approximately 11 bp, whereas the cost of hedging through an AMPS trade was approximately 5 – 6 bp.

So, a positive carry asset hedged very cheaply but leaving a large short gamma position which was not captured by the firm’s risk model. They really were asking to be creamed by a big market move. And then one came along.

Interest rate markets turbulence March 21, 2008 at 7:50 am

One of the features of an illiquid, crisis-hit market is that arbitrage relationships break down. Another is a dramatic rise in settlement risk as bonds, even the best bonds, become illiquid. We are seeing both at the moment:

  • The usual relationship between forward Libors and spot is breaking down by as much as ten basis points. In Euros for instance the 3m Libor 3m forward given by the futures is significantly different from that given by the spot Libors. (Sorry, I can’t find a link to this.)

  • Alea reports on a massive increase in repo market fails. The raw data is here.
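The first anomaly is easy to state in code. Under the usual no-arbitrage relationship, the 3m-forward 3m Libor is pinned down by the spot 3m and 6m deposits. The quotes below are invented, crisis-flavoured numbers, not actual March 2008 fixings:

```python
def implied_3m_forward(libor_3m, libor_6m):
    """3m Libor, 3m forward, implied by spot deposits under simple
    compounding (day count subtleties ignored for clarity)."""
    growth_6m = 1.0 + libor_6m * 0.5
    growth_3m = 1.0 + libor_3m * 0.25
    return (growth_6m / growth_3m - 1.0) / 0.25

# Invented quotes for illustration
cash_implied = implied_3m_forward(libor_3m=0.046, libor_6m=0.045)
futures_implied = 0.0445      # hypothetical futures-implied forward
basis_bp = (cash_implied - futures_implied) * 1e4
```

In a normal market `basis_bp` would be held near zero by arbitrageurs borrowing and lending across the two instruments; the point of this post is that without risk capital a gap like this can sit there for weeks.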

The FT reports that a fund has been caught in another IRD storm:

A $3bn London hedge fund lost more than a quarter of its value on Monday as it became the biggest victim of the unwinding of a popular Japanese government bond trade that hit many rivals this week.

Endeavour Capital, run by former Salomon Smith Barney fixed-income traders, told investors it fell 27 per cent as a highly leveraged bet on the spread between short- and long-dated JGBs was hit by contagion from the US financial crisis and domestic worries. […]

Hedge funds scrambled to unwind the so-called “box trade” – betting that 20-year bond and swap spreads would widen as seven-year spreads narrowed – early on Monday when the market moved sharply against them.

Presumably then they were long 20 year swap spread and short seven year, under the assumption that sevens were wide compared with twenties. Reuters gives more detail:

On Tuesday the 20-year swap rate was quoted at 2.050 percent compared with a 20-year bond yield of 2.150 percent, meaning the swap rate is 10 basis points below the bond yield.

The spread had reached to near -20 basis points on Monday when hedge funds and players were scrambling to unwind positions in a market where it is increasingly difficult to get trades done due to the worsening credit crisis, especially in the United States.

By contrast, the five-year yen swap spread soared to a record peak near 40 basis points last week. That move has stemmed partly from the jump in yen LIBOR rates due to the money market squeeze in the United States and Europe.

(The emphasis is mine and I have removed the Reuters tickers.)

In thin markets with forced sellers it is worth bearing in mind the simple fact that if there are more sellers than buyers, prices go down. Usually moves follow arbitrage channels as there are enough prop players willing and able to exploit opportunities. But at the moment there is not enough risk capital to bring these arbs in. Until there is, the markets will remain disconnected.

Update. Paul Krugman sets out another aspect of this dislocation here, noting that plummeting 1m T bill rates have not resulted in a matching fall in FED funds. They have fallen, but banks’ reluctance to lend to each other even via the FED has kept FED funds at a relatively high spread over 1m CMT.

1y CMT vs FED funds

Profit from the Bayesians February 9, 2008 at 10:46 pm

The markets do not only trade on participants’ assessment of the value of securities. They also trade on participants’ assessment of the beliefs of other participants about the value of securities. If you think everyone is going to sell a stock tomorrow, you’ll sell it today, regardless of your assessment of its long term value, because if need be you can buy it back more cheaply the day after tomorrow.

If you can successfully determine what market participants will do, arbitrage opportunities sometimes result. At the moment there may be one concerning the quantitative equity funds. We know roughly what these funds do – they try to exploit market dislocations. The fund has an idea of what is a dislocation because it has a model of what relationships ‘should’ hold. And these models are often calibrated using some form of Bayesian network.

Now here’s the interesting bit. It seems many of these funds have broadly the same position, or at least the same kinds of position. We suspect this because when one quant fund liquidates, it appears that some of the others take a bath. The fund being liquidated had longs and shorts based on its model; when those longs are sold, the stocks fall in price, and when the shorts are bought back, they rise: the opposite of the behaviour the model predicts. Since funds share models, or at least modelling assumptions, this hurts a number of quant funds simultaneously.

So, here’s the plan. First figure out what that model portfolio looks like, roughly. It should not be too hard to make a plausible guess at the basic structure of the Bayesian model some of the funds use: just run it, and get a portfolio – we’ll call this portfolio A. Then figure out a portfolio that is mostly market neutral (and in particular does not lose much money on those days that the market relationships predicted by the model do hold) but makes a lot of money if portfolio A has a really bad day. You can think of this as a far out of the money put on the model’s correctness. Wait for the next quant fund liquidation (which is bound to come, as someone is always over-leveraged), then buy a yacht. Or some kind of boat, anyway.

Negative basis trading February 8, 2008 at 8:02 am

The FT recently discussed negative basis trades. Here is the basic idea.

  • A bank buys a bond, typically a long dated one.
  • The bank buys a CDS or a financial guarantee policy to maturity of the bond from a counterparty, often a monoline.
  • The bank would then hedge against the risk that the protection it had bought was ineffective, often with another monoline.

This was profitable despite the multiple layers of protection since the credit spread of the bond was bigger than the cost of the first and second hedges combined. Remember a bond spread includes compensation for much more than the risk of default: it includes compensation for illiquidity, for the volatility of the value of the bond, and so on. The bank is basically monetising those premiums.

Most of these trades were done in the trading book so the banks concerned booked the PV of the difference between the credit spread of the bond and the cost of the protection up front.

With the monolines at AAA and monoline protection some tens of basis points, this approach was not a huge issue. But now monoline protection is hundreds of basis points and the AAA ratings might not be with us very long. Also, the bonds used were often either the supersenior tranches of CDOs or long dated inflation linked debt. The latter isn’t a problem: the former is, since the underlying credit quality of these bonds has gone South for the winter too. And negative basis trades are out there in size. The FT says:

Bob McKee, an analyst at Independent Strategy, a London research house, believes that up to $150bn worth of CDO business done by the monolines could be negative basis trades.

Doubtless there are some who will use these trades as another stick to beat the already bloodied body of structured finance. I would suggest the reality is somewhat different. The problem isn’t the trades themselves: it is the selective use of mark to market. Marking the trades up front is fine providing you do it properly. That means:

  • Credit adjusting the pricing of all derivatives. The details are complex here but basically you PV the value of net cashflows from a counterparty back along their risky credit curve rather than along the Libor curve. This has the effect, as a counterparty’s credit quality decreases, of automatically marking down your trades.
  • Valuing trades with realistic default correlation assumptions. In particular, the only time that you need written protection on supersenior ABS is when the ABS market is in trouble – and that is just when the monolines are in trouble. Therefore the probability of joint default of the bond and the protection seller is not

    PD(bond defaults) x PD(monoline defaults)

    as it would be if they were independent. Instead it is something a lot closer to

    min(PD(bond defaults), PD(monoline defaults))

    since the default correlation is so high. For the full negative basis trade it is reasonably close to

    min(PD(bond defaults), PD(monoline1 defaults), PD(monoline2 defaults))

    The real problem, then, is that some banks may have used naive default correlation assumptions in marking these trades and hence are carrying them at an inflated value.
  • Using realistic funding assumptions in valuing the position. I shudder to think about this, but it would not be massively surprising to discover that some of these trades were also valued under the assumption that the bank could fund the bond at Libor flat forever. That means in effect that the position has again been overvalued up front and will show a net carry loss over time.
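The correlation point above is just the Fréchet bounds on a joint default probability. With invented PDs:

```python
def joint_pd_independent(pd_a, pd_b):
    """Joint default probability if the two defaults are independent."""
    return pd_a * pd_b

def joint_pd_comonotonic(pd_a, pd_b):
    """Upper Frechet bound: the joint PD when defaults are perfectly
    dependent, so the safer name only ever defaults alongside the other."""
    return min(pd_a, pd_b)

pd_bond, pd_monoline = 0.10, 0.08   # invented lifetime PDs
naive = joint_pd_independent(pd_bond, pd_monoline)
stressed = joint_pd_comonotonic(pd_bond, pd_monoline)
```

Here the comonotonic number is ten times the independent one. A bank marking its wrapped super senior with anything close to the independence assumption is carrying it at a badly inflated value once the bond and the wrapper are driven by the same ABS downturn.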

Of course none of these issues would have seen the light of day without the declining credit quality of the monolines. But it does highlight the fact that those banks which have prudent P/L recognition and state of the art valuation policies are much better placed to withstand market turmoil than those who don’t.

‘Sucks’ is an invariant January 29, 2008 at 8:09 am

If a model sucks today, the strong likelihood is that it will suck tomorrow. VAR sucks. Let’s reprise the case.

VAR is procyclical. As markets rise, volatilities and correlations tend to fall so the VAR for a position goes down, encouraging over-leverage. When they crash, VAR goes up, encouraging firms to cut at the worst moment and exacerbating illiquidity.

VAR models are based on history. The better we calibrate to the recent past, the less accurate VAR is likely to be: this is true for both simple variance/covariance VAR models and for more sophisticated historical simulation ones.

To see this, consider a delightful note to UBS’s 2004 accounts:

Over the past two years, growth in asset-backed securities has outpaced other sectors in the fixed income markets. At the same time, our Investment Bank’s market share in this sector has grown, leading to an increase in exposure. To date these exposures have been represented as corporate AAA securities in VAR, leading to a conservative representation of credit spread risk.

To better reflect the risk in Var, we have increased the granularity of our risk representation of such securitized products. In July 2004, the Swiss Federal Banking Commission (SFBC) gave their approval for this change and we have implemented the revised model during third quarter.

The enhanced model added a number of historical data series, which more closely reflect the individual behavior of products such as US Agency debentures, RMBS & CMBS, and other asset backed securities such as credit card & automobile loan receivables.

Then look what happened:

In other words, UBS rightly improved its model. But because the improved model was better calibrated to recent conditions, it reduced the risk shown on ABS. ABS ended up costing UBS billions.
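The calibration problem generalises: an historical-simulation VaR run off a recent quiet window will always be flattered relative to one that includes a stress. With synthetic returns (the volatilities and window lengths are arbitrary):

```python
import numpy as np

def hist_sim_var(returns, conf=0.99):
    """Historical-simulation VaR: the conf-quantile of the loss
    distribution implied by the supplied return history."""
    return float(np.quantile(-np.asarray(returns), conf))

rng = np.random.default_rng(7)
quiet = rng.normal(0.0, 0.005, size=500)     # two calm years
stress = rng.normal(0.0, 0.020, size=250)    # one crisis year

var_calm_window = hist_sim_var(quiet[-250:])  # calibrated to recent calm
var_full_window = hist_sim_var(np.concatenate([quiet, stress]))
```

The calm-window number is the better fit to the recent past and the worse guide to the future, which is exactly the UBS ABS story.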

VAR gives no insight into the tails. Morgan Stanley’s $480M loss on an (at most) $110M VAR is evidence enough of that.

Many firms have highly non-linear P/L distributions. This means they respond in a highly non-linear fashion to extreme returns, making VAR at a reasonable (i.e. statistically significant) confidence interval even less useful. The evidence for this is discussed here.

We got very lucky with the timing. VAR for reporting regulatory capital was approved in the 1996 Market Risk Amendment to Basel I. 1996 came at the end of a relatively quiet period in most markets, with no major equity market or emerging market crashes since the 80s. If the matter had come up for consideration in 1997 (South East Asian crisis), 1998 (Russia, LTCM), 2000 (high tech crash) or 2001 (Argentina, 9/11), the data supporting the veracity of VAR as a risk measure would have looked a lot less good.

See here for a further discussion on Bloomberg, or here for a longer one from Naked Capitalism.

VAR is a reasonable ordinary conditions risk measure. But it gives somewhere between very little and no insight into stress losses. Yet stress losses occur several times a decade. Perhaps we should move towards a simpler capital regime based on large moves and no benefit for diversification. Like, err, the one we had before 1996.

Wolfing down the crunch December 13, 2007 at 7:50 am

There is a fascinating article by Martin Wolf in today’s FT. As usual, let me quote selectively and comment.


[The credit crunch has] called into question the workability of securitised lending, at least in its current form. The argument for this change – one, I admit, I accepted – was that it would shift the risk of term-transformation (borrowing short to lend long) out of the fragile banking system on to the shoulders of those best able to bear it. What happened, instead, was the shifting of the risk on to the shoulders of those least able to understand it. What also occurred was a multiplication of leverage and term-transformation, not least through the banks’ “special investment vehicles”, which proved to be only notionally off balance sheet.

I would distinguish between asset-backed lending and securitised lending, but Wolf is broadly right. It has turned out that the information asymmetry problems in the ABS market are, in some cases, too large to be easily dealt with. The lack of alignment of interests has made this worse.

Banks became more leveraged through the use of SIVs, conduits and so on, but these vehicles just allowed banks to do what they always did – take liquidity, default and term structure risk – more efficiently. The real issue is why only one of these risks, default risk, has regulatory capital assigned against it. (There is no capital charge for interest rate risk in the banking book, and no capital charge for liquidity risk.)


What, more precisely, should a central bank do when liquidity dries up in important markets? Equally, the crisis suggests that liquidity has been significantly underpriced.

Cut rates and broaden the range of collateral eligible at the window, as the Fed has just done.


Does this mean that the regulatory framework for banks is fundamentally flawed?

Yes. See here, here, here and here.


What is left of the idea that we can rely on financial institutions to manage risk through their own models?

Not much, unless it is accompanied by a better understanding of model risk, as here, here, or here.


What, moreover, can reasonably be expected of the rating agencies?

Not a lot. Why did you ever think otherwise?


A market in US mortgages is hardly terra incognita. If banks and rating agencies got this wrong, what else must be brought into question?

It’s not the market, it is the structure. Dollar yen spot is probably the most liquid asset in the world, and certainly one that is very well commented upon. Yet I can structure a dollar yen exotic option whose price is genuinely uncertain (because it is radically different depending on your modeling assumptions). There is a measure of caveat emptor here, though: why did people buy structures they did not understand?
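To see how a price can be "genuinely uncertain" even when the underlying is perfectly observable, consider the simplest possible exotic, a cash-or-nothing digital, priced under Black-Scholes with different vol assumptions. The strikes and vols below are invented; a real path-dependent exotic is worse still, since its value depends on the whole assumed dynamics, not just one vol number:

```python
# Hypothetical illustration: the Black-Scholes value of a digital paying
# 1 if spot ends above the strike, under three different vol assumptions.
# Spot is known exactly; the price still moves by a factor of several.
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def digital_price(spot, strike, vol, t):
    """Undiscounted Black-Scholes price (r = 0) of a cash-or-nothing
    digital call: N(d2)."""
    d2 = (log(spot / strike) - 0.5 * vol * vol * t) / (vol * sqrt(t))
    return norm_cdf(d2)

spot, strike, t = 100.0, 120.0, 1.0
for vol in (0.10, 0.15, 0.20):
    print(f"vol {vol:.0%}: digital worth {digital_price(spot, strike, vol, t):.3f}")
```

Moving the vol assumption from 10% to 20% multiplies the out-of-the-money digital's value several times over, with spot, strike and maturity all fixed and fully transparent.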


Do you remember the lecturing by US officials, not least to the Japanese, about the importance of letting asset prices reach equilibrium and transparency enter markets as soon as possible? That, however, was in a far-off country. Now we see Hank Paulson, US Treasury secretary, trying to organise a cartel of holders of toxic securitised assets in the “superSIV”. More importantly, we see the US Treasury intervene directly in the rate-setting process on mortgages, in an attempt to shore up the housing market.

As George Monbiot pointed out in a nice article about Matt Ridley (ex chairman of Northern Rock), it is often the most vocal proponents of the open market for other people who are the most dirigiste when it comes to their own business. Yes, there is a massive measure of hypocrisy here, and moreover the intervention is not well designed: the MLEC is looking more and more dodgy, and the Bush proposal for mortgage modifications will either cover very few borrowers or make lawyers rich.


A US recession is possible.

Yep. Likely even.

A leading regulator gets it. Finally. Maybe. December 12, 2007 at 11:15 am

Sheila Bair, chairman of the Federal Deposit Insurance Corporation, is trying to wake her fellow supervisors up to the issues in the internal models approaches to capital in Basel 2. She is reported to have said:


Current financial market turmoil has shown up the weaknesses of the models used in the advanced approaches to assessing credit risk under the international Basel II bank safety rules…

The first lesson of the crisis in terms of the Basel II capital adequacy rules is “beware models!”

Bair is right, of course. The models are necessarily inaccurate since calibration is problematic; they are also procyclical, and they tend to support asset price bubbles.


The weaknesses “startlingly revealed” in the Basel II models by the turmoil “are a very bright, flashing yellow light warning us to drive very carefully,” …

The FDIC chairman has previously expressed doubts about what she has described as the “untried” risk models underlying the advanced Basel II approaches that banks can use to assess their credit and operational risks and determine the minimum capital they need to absorb shock losses.

Remember: just because a model backtests well for some period does not mean it is right. A stopped clock backtests well in the few seconds around the time at which it stopped.

Update. The Telegraph (yes, I know, but I couldn’t find a reference from a reputable paper in a hurry) carries a report on a slightly alarmist but interesting speech from Peter Spencer of the Ernst & Young ITEM Club. Spencer says that the Basel 2 rules


..are the root cause of the crunch and were serving to worsen the City’s plight.

Dismissing the assumption that banks are not lending to each other on the money markets because they lack confidence in each others’ potential solvency, he argued that they were, in practice, prevented from lending the cash at all because it could leave their balance sheets falling foul of the Basel regulations.

Presumably the idea is that now that lines of credit below 364 days are no longer free in capital terms, banks are rationing capital and hence not lending. This seems unlikely, especially as the capital usage of short-term interbank lending is low. I wonder what evidence Spencer has.

What is the nature of a model price? November 7, 2007 at 7:25 am

In an otherwise excellent post on mathematical finance and its fallacies, Epicurean Dealmaker says something that might be confusing:


Black-Scholes works not because it describes some external ontological fact about how pricing relationships between securities and their derivatives have to work; it works because everyone agrees, more or less, that that’s how prices should work. It is a convention, not a physical or financial law.

Black-Scholes is a convention for quoting prices. In particular, when we say ‘6000 strike 5 year FTSE puts are trading at 21 vol’, what we mean is ‘if we put 21% into Black-Scholes along with all the other parameters, we get the right price for this option’, that is, our expected cost of hedging it. What we do not mean is that the dynamics of the FTSE follow a log normal diffusion, as Black-Scholes assumes.

Things get dangerous when we go from interpolation to extrapolation. Using Black-Scholes to deduce the price of the 5950 strike when we know the market prices of the 5900 and the 6000 strike options is fairly safe. Using it to deduce the price of a ten year option when we only know the prices of five year instruments is more dangerous, especially in the presence of persistent fat tails and autocorrelation. Using it (or indeed anything else) to bet large numbers of dollars on the cost of delta hedging a path-dependent exotic option is alarming (unless you are leaving before the real cost of the hedge strategy becomes clear).
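The "fairly safe" use of Black-Scholes as a quoting convention can be sketched end to end: take two market put prices, invert them to implied vols, interpolate in vol space, and price the in-between strike. All numbers here are invented, and for checkability the "market" quotes are synthesised from known vols so the round trip is visible:

```python
# Sketch of the quoting convention: prices <-> vols via Black-Scholes,
# then interpolation in vol between quoted strikes. Numbers invented.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(spot, strike, vol, t, r):
    """Black-Scholes European put price."""
    d1 = (log(spot / strike) + (r + 0.5 * vol * vol) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return strike * exp(-r * t) * norm_cdf(-d2) - spot * norm_cdf(-d1)

def implied_vol(price, spot, strike, t, r, lo=1e-4, hi=2.0):
    """Invert bs_put for vol by bisection (put price increases in vol)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_put(spot, strike, mid, t, r) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

spot, t, r = 6000.0, 5.0, 0.04
# Synthetic "market" quotes for the 5900 and 6000 strike 5y puts:
quotes = {5900.0: bs_put(spot, 5900.0, 0.22, t, r),
          6000.0: bs_put(spot, 6000.0, 0.21, t, r)}

# Recover the vols from the prices, interpolate, price the 5950 strike.
v_lo = implied_vol(quotes[5900.0], spot, 5900.0, t, r)
v_hi = implied_vol(quotes[6000.0], spot, 6000.0, t, r)
v_mid = 0.5 * (v_lo + v_hi)
print(f"interpolated 5950 vol: {v_mid:.2%}")   # about 21.5%
print(f"5950 put price: {bs_put(spot, 5950.0, v_mid, t, r):.1f}")
```

Nothing in this round trip asserts that the FTSE actually diffuses lognormally; vol is just the coordinate system the market quotes in. The danger starts when the same machinery is pushed out to maturities or payoffs for which no neighbouring quotes exist.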

Now, about those $100 strike oil digitals…