Category / Probability Theory

Another interpretation of 7% = safe October 4, 2011 at 5:00 pm

There is another way of understanding yesterday’s anomalous result that banks with a Basel III ratio greater than 7% did not fail, yet it seemed from some rather rough maths as though some of them should have done. It is simply that more capital protects a bank for longer, and thus you don’t need enough capital to survive a big shock, just enough to survive long enough for the authorities to intervene. In other words, some of those highly capitalized banks would have failed absent intervention…

Arguing against myself October 3, 2011 at 3:23 am

Well, a little bit.

Quite a few posts recently have argued against higher capital requirements. That doesn’t mean I believe banks don’t need more capital, just that it should not take the form of a higher minimum requirement (because only capital above the minimum can be used to absorb losses). I do think, though, that those capital increases should be delayed.

Given this, you might think I would be pleased when The Clearing House produced a document suggesting that a capital ratio above 7% was not necessary. Unfortunately, part of this document made me suspicious. They

Analyzed the relationship between Basel III capital ratios of large global banks at the onset of the financial crisis (defined as December 2007), and subsequent bank distress during the crisis.

123 banks were in the sample, representing 65% of global banking. The document reports the following results:

Pre-crisis Basel III capital ratio   Probability of distress
<4.5%                                43%
4.5% – 5.5%                          29%
5.5% – 7%                            22%
>7%                                   0%

It was that zero that worried me. It was too convenient, and it didn’t seem to fit with the other data points. So I did some modelling.

Warning: what comes next is very, very crude.

For rather general (i.e. not very good) reasons, we would expect this distribution to be fat-tailed. Given only three data points – not very many – we can’t fit anything sophisticated, so you will have to make do with this:

[Figure: failure probability as a function of capital ratio – data points plus fitted curve]

Note that the fit to the data points isn’t terrible. Now, obviously we are extrapolating a long way beyond what we know, but still, using the fit we get

Basel III capital ratio   Probability of distress
8%                        13%
10%                        7%
12%                        4%

99% safety comes at a capital ratio of 18% and 99.9% at 25%.
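The post does not say what functional form was fitted, so as an illustration of the kind of crude exercise involved, here is a least-squares power-law fit through the three non-zero buckets. The bucket midpoints (4%, 5%, 6.25%) are my assumption, and this particular fit decays more slowly than the one in the post (whose tail must be steeper to reach 13%/7%/4%), so treat it as a sketch of the method, not a reproduction:

```python
import math

# Hypothetical reconstruction: The Clearing House buckets, with assumed
# representative capital levels (the post does not say which were used).
capital = [4.0, 5.0, 6.25]          # assumed bucket midpoints, in %
p_distress = [0.43, 0.29, 0.22]     # reported distress frequencies

# Least-squares fit of a power law p(c) = A * c^(-k) in log-log space.
xs = [math.log(c) for c in capital]
ys = [math.log(p) for p in p_distress]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
k = -slope                           # tail exponent (about 1.5 here)
A = math.exp(my - slope * mx)

def p_fit(c):
    """Fitted probability of distress at capital ratio c (in %)."""
    return A * c ** -k

# Extrapolate beyond the data, as the post does.
for c in (8, 10, 12):
    print(f"{c}%: {p_fit(c):.1%}")
```

Any fat-tailed family fitted through three points will extrapolate differently; that fragility is rather the point of the post.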

Now, I would be the first to say that this ‘analysis’ claims far more than the data will support. But still, it is interesting that this work suggests that you would have needed a very high capital ratio to get a low probability of failure during the 2008 crash. (Of course it tells us nothing about what might be needed in other, as yet unforeseen market events.)

Just as a final riff, what is the likelihood that The Clearing House would have observed zero failures in the >7% bucket if our fit were correct? They say that there are roughly the same number of banks in each bucket, so the >7% bucket should contain roughly 31 banks. We will assume all these banks have a Basel III ratio of 8%. Then we would expect to see 13% × 31 ≈ 4 distress events. Assuming a binomial distribution, the chance of seeing zero events when you expect 4 from 31 is 1.5%. This is not low enough to conclude that our fit is wrong. Moreover, if even one of those 31 is questionable and should really be counted as a distress, then the probability jumps to 6.5%. So, to be charitable, The Clearing House might be unlucky rather than mendacious if the analysis above is broadly correct.
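The binomial arithmetic above is easy to check. A minimal sketch, using the rounded 13% from the fit (small rounding differences in that input explain the slight mismatch with the figures quoted in the post):

```python
from math import comb

p = 0.13      # fitted probability of distress at an 8% capital ratio
n = 31        # approximate number of banks in the >7% bucket

def binom_pmf(k, n, p):
    """Probability of exactly k distress events out of n banks."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_zero = binom_pmf(0, n, p)    # chance of seeing no distress at all
p_one = binom_pmf(1, n, p)     # chance of exactly one distress event

print(f"P(0 events) = {p_zero:.1%}")   # about 1.3%
print(f"P(1 event)  = {p_one:.1%}")    # about 6.2%
```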

Tarring Taleb January 21, 2009 at 12:44 pm

I have always been a little suspicious of Nassim Taleb. He seems to take too much pleasure in discussion of crises. And his first book — a very conventional account of hedging — isn’t actually very useful for running portfolios of options. Now a post on Models and Agents (an excellent blog I have only found recently) gives a more focussed critique:

the current crisis is not a black swan. Alas, the world’s economic history has offered a slew of (very consequential) credit and banking crises … So not only aren’t credit crises highly remote; they can be a no-brainer, particularly if they involve extending huge loans to people with no income, no jobs and no assets.

Taleb also recommends that we buy insurance against good black swans—that is, investments with a tremendous (though still highly remote) upside but limited downside. For example, you could buy insurance against the (unlikely?) disappearance of Botox due to the discovery of the nectar of eternal youth. And make tons of money if it happens.

And that surely is the point. Yes, the unexpected happens with considerable frequency. But knowing which black swan is more likely than the market is pricing is the hard part. Buying protection in the wings on everything is far too expensive to be a good trading strategy. If all Taleb’s observations amount to is the claim that being long gamma can sometimes be profitable, then they are hardly prophetic. What would be much more useful would be his analysis of when, exactly, black swan insurance is worth buying.

Epicurean Dice December 12, 2008 at 2:41 pm

The Epicurean Dealmaker has a post about risk and uncertainty. He makes some good points, and I want to expand on one of them: the respect we should have for the random nature of the markets.

Think about it like this. Mostly in finance we assume that we have the equivalent of a standard die. That is, while we don’t know what number will come up next, we think that we know the distribution of numbers perfectly. In fact the real situation is much more akin to throwing a die where we have imperfect knowledge of what numbers are on the faces. They might be 1 to 6; but they also might be 1 to 5 with the 1 repeated; or 2 to 7; or something else entirely. Worse, the numbers are changed by the malevolent hand of chance on a regular basis. Not so often that we know nothing about the distribution, but often enough that we cannot be sure that the current market will be like the past.

Thus our risk estimates are potentially wrong for at least two reasons. We might have been wrong about the past distribution. And even if we got that right, it might be different in the future. In other words, you can’t manage risk effectively by assuming you know the distribution – to be effective, you really must assume that you don’t. Thus you don’t just want your risk to be low enough based on one model: you want it to be low enough based on all (or at least all likely) models.
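The die metaphor above can be made concrete. A minimal sketch (the three candidate face-sets are the hypothetical ones from the post): compute the risk measure under every plausible model, then manage to the worst case rather than to the single assumed model:

```python
from fractions import Fraction

# Candidate models for the die faces -- we are not sure which is right.
models = {
    "standard":    [1, 2, 3, 4, 5, 6],
    "doubled one": [1, 1, 2, 3, 4, 5],
    "shifted up":  [2, 3, 4, 5, 6, 7],
}

# Suppose the "loss event" is the die showing 1 (a bad market day, say).
def prob_of_one(faces):
    return Fraction(faces.count(1), len(faces))

for name, faces in models.items():
    print(f"{name:12s} P(roll = 1) = {prob_of_one(faces)}")

# Prudent risk management: take the worst case over all candidate models,
# not the probability under the one model we happen to assume.
worst = max(prob_of_one(f) for f in models.values())
print(f"worst case   P(roll = 1) = {worst}")
```

Under the standard die the loss probability is 1/6; over the whole model set it is 1/3, double the single-model estimate. That gap is exactly the model risk the post is describing.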

Decision making and compensation March 10, 2008 at 12:16 pm

Alea references a fascinating paper by Richard Herring and Susan Wachter, Bubbles in Real Estate Markets. This makes an important point about the stability of financial return distributions:

The ability to estimate the probability of a shock – like a collapse in real estate prices – depends on two key factors. First is the frequency with which the shock occurs relative to the frequency of changes in the underlying causal structure. If the structure changes every time a shock occurs, then events do not generate useful evidence regarding probabilities.

And of course this is the case in most (all?) financial situations. The next crisis is never like the last one, in part because trading behaviour is changed by memories of the last one, in part because of product innovation and the changing structure of financial intermediation.

The important question then is

How do banks make decisions with regard to low-frequency shocks with uncertain probabilities? Specialists in cognitive psychology have found that decision-makers, even trained statisticians, tend to formulate subjective probabilities on the basis of the “availability heuristic,” the ease with which the decision-maker can imagine that the event will occur. […]

This tendency to underestimate shock probabilities is exacerbated by the threshold heuristic (Simon, 1978). This is the rule of thumb by which busy decision-makers allocate their scarcest resource, managerial attention. When the subjective probability falls below some threshold amount, it is disregarded and treated as if it were zero.

Compensation policies make this worse. Suppose you buy insurance against a rare event that everyone else ignores. Your return suffers relative to theirs, and hence in most years (when the rare event does not happen) you are paid less. If the event does happen, the bank’s total income will almost certainly fall anyway, compensation will be tight, and you will likely not be rewarded properly for being the far-sighted manager you are. Meanwhile, in that bad year, those who did not buy the insurance acted like nearly all their peers, and hence will probably not be punished too badly.

How linear is Goldie? November 1, 2007 at 1:29 pm

A non-linear position translates a normal distribution of market returns into a non-normal distribution of P/Ls. It used to be reasonable to assume that trading revenues were normally distributed. For instance, here is Goldman from Q3 2004 (picked up via Alea):

Now, though, some of the broker dealers are strongly non-linear. Here’s Goldman Q3 2007:

Morgan Stanley for the same period is similarly non-normal:

There’s nothing inherently wrong with this, but it does mean that any old-style investment model (CAPM, for instance) which relies on normal returns won’t deal with stocks like this correctly, and concepts like beta to the market become less meaningful.
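A tiny simulation makes the mechanism clear. All the numbers here are hypothetical: an option-like book with capped downside turns normally distributed market moves into a positively skewed P/L distribution, which is precisely what normal-returns machinery cannot describe:

```python
import random
import statistics

random.seed(0)

# Normally distributed daily market returns (hypothetical parameters).
returns = [random.gauss(0.0, 0.01) for _ in range(100_000)]

# A linear position: P/L simply proportional to the return.
linear_pl = [100 * r for r in returns]

# A non-linear, option-like position: downside capped, upside kept --
# a crude stand-in for a book that is long optionality.
nonlinear_pl = [max(100 * r, -0.5) for r in returns]

def skewness(xs):
    """Standardized third moment of a sample."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

# Near zero for the linear book, clearly positive for the non-linear one.
print(f"linear skew:     {skewness(linear_pl):+.2f}")
print(f"non-linear skew: {skewness(nonlinear_pl):+.2f}")
```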

Smoothly runs the Don June 21, 2007 at 3:35 pm

Earlier, I wrote something about changing distributions. More recently a couple of examples of this phenomenon have come up, so let’s make things concrete.

Suppose x(t) is randomly distributed according to N(0,s(t)) [normal distribution with mean zero and standard deviation s(t)] where s(t) is a continuous, bounded and slow function of t. Suppose we sample x(t) discretely. (Take s(t) = 2 + sin(t) with t in years and daily sampling, for instance.)

The variables x(t) are not iid, but (under a bunch of smoothness conditions) they are locally iid: we can think of the distribution as being fibred over time, with a small change in time inducing a small change in the distribution. One might hope to import a lot of non-parametric statistics into this setting, where we are trying to gain information about the variation of s(t) by sampling the xs.
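A minimal simulation of this setup, assuming 252 business days per year (my convention, not the post’s): draw x(t) from N(0, s(t)) with s(t) = 2 + sin(t), then use a rolling-window standard deviation as the crudest possible local estimate of s:

```python
import math
import random

random.seed(1)

def s(t):
    """Slowly varying, deterministic standard deviation; t in years."""
    return 2 + math.sin(t)

# Daily sampling over 4 years: the x(t) are independent but not iid.
dt = 1 / 252
ts = [i * dt for i in range(4 * 252)]
xs = [random.gauss(0, s(t)) for t in ts]

# Locally iid: over a short window s(t) is nearly constant, so a rolling
# sample standard deviation estimates s at the window centre.
window = 60
for centre in (252, 504, 756):
    chunk = xs[centre - window // 2 : centre + window // 2]
    est = math.sqrt(sum(x * x for x in chunk) / len(chunk))
    t = ts[centre]
    print(f"t = {t:.2f}y  true s = {s(t):.2f}  rolling estimate = {est:.2f}")
```

The window length is the usual bias–variance trade-off: too short and the estimate is noisy, too long and s(t) drifts within the window.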

Note that this is not a stochastic volatility model or a GARCH model: volatilities are not random, but rather determined by an initially unknown function.

Is this sort of thing well known, I wonder? It’s similar to local volatility models, but there we can deduce s(t) with certainty because we know the prices of all vanilla options, whereas I’m more interested in a situation where we can only observe s(t) by sampling x(t).

One of the applications, as in this picture of a large piece of metal hanging from a crane a hundred feet above a busy junction, is operational risk, but there are others which may be even more slippery.

Through the maths darkly November 18, 2006 at 6:09 pm

Seeing a recent Boing Boing post on a talk by Mandelbrot, it occurred to me to blog about a worry I have had for a while about probability…

It’s like this. The frequentist reading of classical (Kolmogorov) probability theory is based on the idea of repeated identical experiments. We take many copies of a system, perform a measurement lots of times, and review the distribution of outcomes.

This makes sense for many application areas: the most obvious place it works is statistical mechanics. Here we look at a physical system composed of many, many identical units, each taking properties from the same distribution. But for everything from the mathematics of poker to the behaviour of random biological mutations, the idea of having samples from the same random process makes sense. Even if we only have one experiment, as long as we can imagine conducting further tests on the same system, the setup remains reasonable.

One of Mandelbrot’s areas of interest is finance, and here when we are examining the random behaviour of markets, we can think of the underlying as being locally stable. The FTSE is not the same today as ten years ago, nor does it behave in a similar fashion, but it does seem (mostly) similar enough today to yesterday that (at least for short periods of time) we can talk about the random process generating the FTSE. This still makes sense since today’s observation is done on a system that is fairly close to that used for yesterday’s. (Of course whether that process is Lévy, Fréchet or whatever is another matter entirely, and I am assuming there wasn’t a crash today or yesterday.)

The problem comes when we cannot, even in principle, imagine conducting the same experiment more than once. For instance, one ‘theory’ of jet lag is that the body clock goes ‘chaotic’. The idea here is that if I fly from London to New York, rather than my body clock moving smoothly from London time to New York time, it becomes disrupted, then eventually settles at the new time. The word ‘chaos’ is helpful in that it highlights the disruption (and the need for forcing to speed the return to normality), but it is unhelpful in that it suggests that it makes sense to talk about what ‘time’ my body clock is showing when I am jet lagged. It doesn’t, since you cannot copy my body clock and run repeated tests on the same episode of jet lag, observing how tired I feel or when I wake up in each copy. If you cannot sample the process more than once, is this really a situation where the idea of a random variable taking a value is meaningful?

For that matter, one might have a similar philosophical issue with using Poisson (or any other kind of random) processes to model corporate defaults. There are not many corporates that default more than once, so again is this a situation where the idea of an underlying random process makes sense? How do we even know that there is a stable generating process if we cannot even in theory do more than one experiment? Epistemologically it looks dodgy to me…