Category / Model risk

If it doesn’t work as a hedge fund strategy, try making policy with it March 7, 2014 at 12:24 pm

Capital structure ‘arbitrage’ is largely discredited as a hedge fund strategy for the rather good reason that a lot of people lost a lot of money on it. An arbitrage, remember, is supposed to involve a risk-free profit. But using the Merton model or its variants to ‘arbitrage’ between different parts of the same company’s capital structure didn’t work very well – or, at least, it worked well until it didn’t. One of the problems (aside from the various liquidity premiums embedded in prices) is that the first generation of these models assumed that the value of a firm’s assets follows a random walk with fixed volatility: it doesn’t. In fact it has been known for over two decades that the PDs backed out from Merton models are far too high, something that KMV try to fix with some success at the cost of what might kindly be termed ‘pragmatic adjustments’. Now there may well be capital structure arbitrage models which don’t have these first generation problems and don’t involve arbitrary adjustments, but they are not well known (not least because if you had one that worked, you would want to use it to trade rather than to burnish your academic credibility).
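For readers who haven’t met it, the first-generation mechanics are simple enough to sketch: assets follow a lognormal random walk with fixed drift and volatility, and default happens if they finish below the face value of the debt at the horizon. This is a stylised sketch with invented inputs, not anything a practitioner would run as-is; in practice the asset value and its volatility are unobservable and have to be backed out from equity, which is where the ‘pragmatic adjustments’ come in.

```python
from math import log, sqrt
from statistics import NormalDist

def merton_pd(asset_value, debt_face, mu, sigma, horizon):
    """Probability that assets fall below the debt barrier at the horizon,
    under the first-generation assumption of a lognormal random walk with
    fixed drift and volatility."""
    d2 = (log(asset_value / debt_face) + (mu - 0.5 * sigma**2) * horizon) / (sigma * sqrt(horizon))
    return NormalDist().cdf(-d2)

# Illustrative numbers only: in reality V and sigma are not observable and
# have to be inferred from equity prices, which is where the trouble starts.
print(merton_pd(asset_value=120.0, debt_face=100.0, mu=0.05, sigma=0.25, horizon=1.0))
```

The GAO exercise discussed below turns on exactly how seriously you take a number like this one.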

There is an exercise by the US Government Accountability Office to determine how much lower big bank borrowing costs are due to expectations of government bailouts. Stefan Nagel suggests on Bloomberg that there is a risk that, by using a simple Merton-type model, the GAO will “underestimate both the banks’ proper borrowing costs and the implicit subsidy they receive from taxpayers”. True, there is. But there is also the risk that they will overestimate it, not least because, as we noted above, models like this are flawed, and they tend to overestimate PDs. I absolutely think that taxpayers deserve to know what the implicit subsidy they are providing to big banks is worth – but by the same token, I think they deserve to know the model risk in those estimates. Scaremongering to suggest that the estimate will necessarily be too low is not helpful here.

Ontogeny recapitulates phylogeny March 4, 2014 at 12:50 pm

Ontogeny recapitulates phylogeny, or ORP, was a hypothesis in evolutionary biology whereby it was conjectured that an organism’s development (ontogeny) will take it through each of the adult stages of its evolutionary history (phylogeny). The word ‘recapitulates’ is important: this isn’t a strict repeat, and it can include secondary development, variation, even omission (think Beethoven Op. 31 No. 2 or the Brahms Piano Quintet). Thus we are not saying that from egg to chicken we get a fish, a lizard-like reptile and an ancestral bird; merely that there may be echoes of one or more of these in a chick.

Noahpinion recently pointed out that ORP is more common in model building than in infant development. He was talking about macro, but it’s a sensible strategy in finance too: you start off with a correlated random walk model, for instance, then change the copula or add stochastic volatility or something. That made me think about the design risk of this process: if you want to hack up a Heston model in a hurry, starting from a quanto model probably isn’t a bad idea; but if you want to do something more novel, then it can lead you down a blind alley. There’s also the efficiency issue: something designed from scratch is likely to be much more efficient than code that has been modified from an earlier application.
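To make the recapitulation pattern concrete, here is a toy sketch (numpy only, all parameters invented): a constant-volatility random walk simulator, and the ‘evolved’ version with a Heston-style variance process bolted onto the same skeleton. The point is the shape of the change, not the realism of either model.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_gbm(s0, mu, sigma, dt, n_steps):
    """First generation: a constant-volatility lognormal random walk."""
    z = rng.standard_normal(n_steps)
    log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_returns))

def simulate_stoch_vol(s0, mu, v0, kappa, theta, xi, rho, dt, n_steps):
    """Second generation: the same skeleton, but the variance now follows its
    own mean-reverting process, correlated with the spot shocks."""
    s = np.empty(n_steps)
    spot, var = s0, v0
    for i in range(n_steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal()
        spot *= np.exp((mu - 0.5 * var) * dt + np.sqrt(var * dt) * z1)
        var = max(var + kappa * (theta - var) * dt + xi * np.sqrt(var * dt) * z2, 1e-8)
        s[i] = spot
    return s

path_v1 = simulate_gbm(100.0, 0.05, 0.20, 1 / 252, 252)
path_v2 = simulate_stoch_vol(100.0, 0.05, 0.04, 2.0, 0.04, 0.5, -0.7, 1 / 252, 252)
```

Notice that the second function inherits the skeleton of the first, which is quick; it also inherits its structure, which is the efficiency point: a from-scratch design might vectorise the whole thing rather than stepping through day by day.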

When model building, then, think before you modify. Evolution has dead ends too, and you don’t want a model that is the equivalent of a bird with teeth.

My new favourite website January 17, 2013 at 4:34 pm

It was cakewrecks.com, but thanks to Lisa at FT Alphaville there is a new winner: the European Spreadsheet Risks Interest Group horror stories page. How can you resist a site with items like this:

MI5 wrongly collected subscriber data on 134 telephone numbers as a result of a software error, according to interception of communications commissioner Sir Paul Kennedy’s annual report.

A spreadsheet formatting error caused the service to apply for data on the identity of telephone numbers ending in 000, rather than the actual last three digits.

Sure, they have JPM too, but c’mon, secret agent spreadsheet errors are way sexier.

Valuing non-traded derivatives January 2, 2013 at 2:55 pm

There has been further kerfuffle over Deutsche’s handling of gap options in leveraged supersenior trades. For instance, the FT reports the remarks of a couple of accounting professors. Charles Mulford says

“I believe that the gap risk should have been adjusted to market value – consistent with the views of the former employees,” adding: “One cannot mark-to-market the upside but not the downside.”

While Edward Ketz, according to the FT,

said that in an illiquid market accounting rules still applied and if Deutsche could not determine a market price it should have taken a conservative view and discounted the value of the trade.
“The whole idea of lack of liquidity and lack of knowing what’s out there means the fair value becomes much smaller,” he said.

Leaving aside for a second the fact that few bank CFOs would give a darn what accounting professors think about valuation (for the very reasonable reason that accountants who know their OIS discounting from their DVA are rarer than hen’s teeth), these comments represent a fundamental misreading of the valuation process for non-traded instruments.

What is really going on is:

  • Absent a market, you have to value financial instruments using a model.
  • There is almost always a choice of models.
  • The calibration of the model – and indeed its calibratability – matters as much as the mathematics of the model itself.
  • Some models are clearly bad choices when applied to some products as they do not capture essential features of the product.
  • Some calibrations of sensible models are foolish, as they do not reflect where the hedges that will actually be used trade.
  • There is often a wide choice of sensible models with sensible calibrations. There is usually no best choice, and no unambiguous ‘market value’.
  • Different choices give different valuations (the sketch after this list puts numbers on this).
  • Different quants will have different views on what is ‘best’. Smart derivatives traders are skeptical of the efficacy of any particular model when applied to non-traded products.
  • You will only know if the model and calibration choice you made was sensible after the product has matured and you have examined whether the hedge strategy you used captured the value that the model said was there.
  • Sometimes it is better not to model a feature using implied parameters if you do not think that it is hedgeable.
  • Taking P/L from this is aggressive, but not something most auditors would have the guts to object to.
  • Deutsche is probably fine, but if you want to know more, you should read Matt Levine.
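To put numbers on the ‘different choices give different valuations’ point, here is a deliberately crude Monte Carlo sketch: the same out-of-the-money digital payoff valued under two volatility calibrations that a reasonable quant might each defend for an illiquid underlying. Everything here is invented for illustration; the gap between the two numbers is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

def digital_call_mc(spot, strike, rate, vol, maturity, n_paths=200_000):
    """Cash-or-nothing call under a lognormal model: pays 1 if S_T > K."""
    z = rng.standard_normal(n_paths)
    s_t = spot * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
    return np.exp(-rate * maturity) * np.mean(s_t > strike)

# Two calibrations a quant might each defend for an illiquid underlying:
# one from a sparse at-the-money quote, one borrowing a proxy name's smile.
for vol in (0.20, 0.35):
    print(f"vol = {vol:.2f}  value = {digital_call_mc(100.0, 140.0, 0.02, vol, 1.0):.4f}")
```

Neither number is ‘the market value’; they are two defensible answers to the same question, and the distance between them is the model risk you have to live with.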

No Good Deals–No Bad Models December 27, 2012 at 7:45 pm

From Boyarchenko et al. (HT Alea):

Faced with the problem of pricing complex contingent claims, investors seek to make their valuations robust to model uncertainty. We construct a notion of a model-uncertainty-induced utility function and show that model uncertainty increases investors’ effective risk aversion… the impact of model uncertainty is to give greater weight (i.e. greater than the investor’s marginal utility) to states in which losses are relatively large.

Interesting…

The recent history of Jamie Dimon, by Matt Levine October 13, 2012 at 9:47 am

The more I read of Matt Levine, the more I like the guy’s style. Here’s a summary of Jamie Dimon’s recent career Levine penned yesterday:

  • Oversees huge opaque credit hedge position designed to optimize regulatory capital treatment.
  • Tweaks VaR model for that position to be all wrong, understating risk and thus probably overstating capital ratios.
  • Adjusts that position in dubious, risk-increasing ways, driven by capital treatment.
  • Loses a zillion dollars.
  • Trivializes that zillion-dollar loss.
  • Announces a material weakness in internal controls regarding, y’know, that.
  • Is all better.
  • Tweaks VaR model again to show less risk.
  • Gets on pugnacious earnings call in which he says he will ask regulators for additional stock buybacks in next year’s stress tests.
  • Brushes aside a question from a UBS analyst to the effect of “do you think your material weakness in internal controls will impact your ability to do buybacks?”
  • Says “our capital levels will be higher than you think because we will tweak our internal models to get them there.”

From which we conclude first that Levine is funny, smart, and not afraid to talk truth to Dimon; and second that the Fed’s model approvals group is going to have a lot of fun with applications from JPM for model ‘enhancements’ in the next few quarters. It is good to have some light shone in that particular corner for once*.

SCSG

*Yes, that was a really weak link for a picture that has nothing to do with the rest of the post. So sue me.

The model raft June 23, 2012 at 12:16 pm

Two posts confirm that many people don’t understand what derivatives models are for. The Epicurean Dealmaker approvingly quotes a commencement address by Atul Gawande:

… you cannot tame chance. That is what makes it chance. At base, implicitly attributing the kind of predictability these individuals seemed to ascribe to chance was a fundamental error, a category-mistake.

This is a lovely turn of phrase, but it is false. You can, sometimes, tame chance. Black-Scholes is an important piece of work because it shows some circumstances under which you can. Yes, those circumstances are limited, and sometimes not realized in practice. But a lot of the time Black-Scholes hedging of simple derivatives works really well. If you combine it with prudent vega hedges and risk limits to constrain the consequences of model breakdown, it works even more of the time.
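If you want to see this for yourself, the following is a minimal simulation of the textbook case the paragraph alludes to (geometric Brownian motion, realised volatility equal to implied, daily rebalancing, no transaction costs, all parameters invented): sell a one-year at-the-money call, delta hedge it to expiry, and look at the spread of the resulting P/L against the premium received.

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf
rng = np.random.default_rng(7)

def bs_call(s, k, r, vol, t):
    d1 = (log(s / k) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    return s * N(d1) - k * exp(-r * t) * N(d1 - vol * sqrt(t))

def bs_delta(s, k, r, vol, t):
    return N((log(s / k) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t)))

def hedged_pnl(s0=100.0, k=100.0, r=0.0, vol=0.2, t=1.0, steps=252):
    """Sell one call for its model price, delta hedge daily, return terminal P/L."""
    dt = t / steps
    s = s0
    delta = bs_delta(s, k, r, vol, t)
    cash = bs_call(s, k, r, vol, t) - delta * s   # premium in, initial hedge bought
    for i in range(1, steps + 1):
        s *= exp((r - 0.5 * vol**2) * dt + vol * sqrt(dt) * rng.standard_normal())
        cash *= exp(r * dt)
        if i < steps:
            new_delta = bs_delta(s, k, r, vol, t - i * dt)
            cash -= (new_delta - delta) * s       # rebalance the hedge
            delta = new_delta
    return cash + delta * s - max(s - k, 0.0)     # unwind the hedge, pay the option off

pnls = [hedged_pnl() for _ in range(2000)]
print(f"premium ~ {bs_call(100.0, 100.0, 0.0, 0.2, 1.0):.2f}, "
      f"hedged P/L: mean {np.mean(pnls):+.3f}, std {np.std(pnls):.3f}")
```

The hedged P/L scatters around zero with a standard deviation that is a small fraction of the premium; loosen the assumptions and the scatter widens, which is exactly what the vega hedges and risk limits are for.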

Next, a farrago derived from the MacKenzie/Spears paper.

If a quant comes up with a model and says up front, hey this is just a sketch of something, it’s not totally realistic, but it’s better than nothing, and then the investment bank ignored the quant’s misgivings and bets the house on the model, who is responsible for the resulting risk?

The person who put the trade on is responsible, as is the head of risk. The quant certainly isn’t. This is for two reasons.

First, no financial model is totally realistic. They are all sketches. Some are better than others. That’s why we have risk limits, that’s why we look at how well hedges perform, that’s why we diversify. Buying $50B of AAA ABS isn’t within limit, it isn’t hedged, and it isn’t diversified. In short it has nothing to do with a quant trading strategy and everything to do with a positive carry position that dishonest managers could pretend didn’t have much risk.

Second and relatedly, it is really important to understand the fundamental shift that happened in the crisis from models-producing-hedge-ratios to models-predicting-values. Copula models, like most pricing models, were originally designed to produce hedge ratios, specifically hedge ratios of bespoke CBOs in the underlying bonds. They were not built to predict the risk or absolute value of CDO tranches, although they did that too. Like all hedging models, they needed to be calibrated. The problems arose when (1) the models were used for something they were not designed for – predicting absolute risk and value – and (2) they were mis-calibrated using historical data that turned out to reflect future reality rather badly. The category error was not confusing statistical variation with uncertainty; it was using a model designed for one thing for something else entirely. Rafts are fun to play on when you are on a quiet lake close to the shore; but only the foolhardy try to cross the ocean on one.
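For the curious, the skeleton of the model in question is easy to write down. This is the standard one-factor Gaussian copula textbook sketch, with an invented homogeneous pool, not anything a bank actually ran; it shows the calibration point by holding the single-name PDs fixed and moving only the correlation input, which is enough to move the expected loss on a mezzanine slice substantially.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)

def tranche_el(pd, rho, attach, detach, n_names=100, lgd=0.6, n_sims=50_000):
    """Expected loss of a tranche on a homogeneous pool under a one-factor
    Gaussian copula: a name defaults if its latent variable falls below the
    threshold implied by its PD."""
    threshold = NormalDist().inv_cdf(pd)
    m = rng.standard_normal((n_sims, 1))                  # common factor
    eps = rng.standard_normal((n_sims, n_names))          # idiosyncratic shocks
    latent = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * eps
    pool_loss = lgd * (latent < threshold).mean(axis=1)   # pool loss, fraction of notional
    tranche_loss = np.clip(pool_loss - attach, 0.0, detach - attach) / (detach - attach)
    return tranche_loss.mean()

# Same single-name PDs, same pool; only the correlation calibration changes.
for rho in (0.1, 0.3, 0.6):
    print(f"rho = {rho:.1f}  3-7% tranche EL = {tranche_el(0.02, rho, 0.03, 0.07):.3f}")
```

The hedge ratios the model was built for are just sensitivities of numbers like these to the inputs; the trouble described above starts when the output itself is treated as the answer.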


People cause crises (incentive structure edition) April 30, 2012 at 2:19 pm

Lisa Pollack has an interesting, if rather too fair-minded, post on Alphaville about the dubious claim that the Black-Scholes formula somehow caused the crisis.

Let’s be clear. Black-Scholes is about options pricing, and hedging in particular. It has nothing to do with securitization, and little with tranching. So the claim that Black-Scholes caused the crisis is BS.

There is a broader claim that somehow mathematical finance in general – and risk models in particular – were to blame. Certainly many VAR models underestimated risk before the crisis, while some models of tranches were responsible for assigning great ratings to assets that didn’t perform well. But no model went out on a wet Wednesday and bought fifty billion of sub-prime ABS. A trader did that. Blame them, and the incentive structure in their firm that encouraged them.

So many choices (but none of them are good) February 14, 2012 at 7:01 am

No, dear reader, not my Valentine’s day, but rather macroeconomic models. Volker Wieland and Maik Wolters have a nice post on VoxEU where they look at the performance of a goodly number of the leading macroeconomic models. This picture in particular struck me:

Macroeconomic model performance

In other words, of the models studied, none – zero, nada, niente – predicted anything like the recession we got, and all the models predicted a materially stronger and swifter recovery than we got. Honestly, given this performance, shouldn’t there be rather more wailing and gnashing of teeth from the economics profession than (with a few honourable exceptions) we have seen?

Transparency and model gaming February 5, 2012 at 12:14 pm

A site with a rather tacky name suggests:

One of the most common reasons I hear for not letting a model be more transparent is that, if they did that, then people would game the model. I’d like to argue that that’s exactly what they should do, and it’s not a valid argument against transparency.

Take as an example the Value-added model for teachers. I don’t think there’s any excuse for this model to be opaque: it is widely used (all of New York City public middle and high schools for example), the scores are important to teachers, especially when they are up for tenure, and the community responds to the corresponding scores for the schools by taking their kids out or putting their kids into those schools. There’s lots at stake.

Why would you not want this to be transparent? Don’t we usually like to know how to evaluate our performance on the job? I’d like to know it if being 4 minutes late to work was a big deal, or if I need to stay late on Tuesdays in order to be perceived as working hard. In other words, given that it’s high stakes it’s only fair to let people know how they are being measured and, thus, how to “improve” with respect to that measurement.

Instead of calling it “gaming the model”, we should see it as improving our scores, which, if it’s a good model, should mean being better teachers (or whatever you’re testing).

This is an interesting point. I certainly agree that if you are going to measure people on x, then telling them what x is is only fair. But I would never promise that x was my only criterion for measuring a real world job, as I don’t believe we can write the specification for many activities well enough to always know that maximizing the x-score is equivalent to doing the job well.

(PFI contracts are of course a great example of this; one of the reasons that PFI is a terrible idea is that you can’t write a contract that defines what it means to run a railway well for ten years that stands up to the harsh light of events.)

Thus I would argue that the problem in the situation outlined above isn’t lack of transparency, it is using a fixed formula to evaluate something complicated and contingent. Sure, by all means say ‘these scores are important’, but leave some room for judgement and user feedback too. Humility about how much you can measure is important too.

There is also a good reason for keeping some models secret, and that is the use of proxies. Say I want to measure something but I can’t access the real data. I know that the proxy I use isn’t completely accurate – it does not have complete predictive power – but it is better than nothing. Here for instance is the FED in testimony to Congress on a feature of credit scoring models:

Results obtained with the model estimated especially for this study suggest that the credit characteristics included in credit history scoring models do not serve as substitutes, or proxies, for race, ethnicity, or sex. The analysis does suggest, however, that certain credit characteristics serve, in part, as limited proxies for age. A result of this limited proxying is that the credit scores for older individuals are slightly lower, and those of younger individuals somewhat higher, than would be the case had these credit characteristics not partially proxied for age. Analysis shows that mitigating this effect by dropping these credit characteristics from the model would come at a cost, as these credit characteristics have strong predictive power over and above their role as age proxies.

Credit scoring models are trying to get at ability and willingness to pay, but they have to use proxies, such as disposable income and prior history, to do that. Some of those proxies inadvertently measure things that you don’t want them to, such as age, but excluding them would decrease model performance.

Here, it is better that the proxies are not precisely known so that they are harder to game. The last thing you want in a credit scoring model is folks knowing how best to lie to you, especially if some of the data is hard to check. It is much better to ask for more than you need, as in psychometrics, and use the extra data as a consistency check (or just throw it away) than to tell people how your model works. Its predictive power may well decline markedly if people know how it works.

Of course, you need a regulatory framework around this so that models which try to measure, for instance, race, are banned, but that does not require model transparency. Sometimes it really is better to keep the model as a black box.

The problem with assessing bond return distributions February 1, 2012 at 1:24 pm

Yesterday we saw that one good way of visualizing bond returns is to look separately at the survival probability and the distribution of returns given default (also known as the LGD distribution).

(A minor technical point – in the prior post I used normal LGD distributions, whereas in fact something like a beta distribution might be more suitable.)

We noted too that once we look at the distribution, subtler differences between bonds than just their probability of default become obvious. Another example of this is how much uncertainty in recovery there is. Consider this example:

Visualizing bonds 5

These two bonds have the same PD, the same average recovery and hence the same expected loss. But one has more uncertainty in recovery than the other, and hence can reasonably be called riskier.
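Numerically the point looks something like the sketch below, with invented numbers and beta-distributed recoveries standing in for the normal ones in the chart (per the technical note above): same PD, same mean recovery, so the same expected loss, but a very different tail.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulated_losses(pd, mean_recovery, concentration, n_sims=200_000):
    """Loss = (1 - recovery) if the bond defaults, zero otherwise. Recovery is
    beta-distributed with the given mean; 'concentration' controls how tightly
    it clusters around that mean (higher = less recovery uncertainty)."""
    a = mean_recovery * concentration
    b = (1.0 - mean_recovery) * concentration
    defaulted = rng.random(n_sims) < pd
    recovery = rng.beta(a, b, n_sims)
    return np.where(defaulted, 1.0 - recovery, 0.0)

for label, conc in (("tight recovery", 50.0), ("uncertain recovery", 2.0)):
    loss = simulated_losses(pd=0.02, mean_recovery=0.40, concentration=conc)
    print(f"{label:>18}: EL = {loss.mean():.4f}, 99.5% loss = {np.quantile(loss, 0.995):.3f}")
```

Both bonds report the same expected loss; only the tail quantile gives the game away.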

Now, a plain vanilla tranched security supported by a diverse pool of collateral assets might well have quite a benign return distribution. Losses come from the bottom up, and if the loss distribution of the collateral is fat tailed, and our tranche is not the bottom of the stack, then we might well find something roughly like this (although of course the precise form is subject to considerable debate):

Visualizing bonds 6

In other words, even if you do get a loss, it will likely not be large. The problem though is that this assumption is rather sensitive both to the collateral loss distribution, and to the structure of the securitization. Something like this is entirely possible too:

Visualizing bonds 7

Now, remember first that it is really hard to know what the real loss distribution is – there is a lot of model risk – and second, its shape really affects the expected loss. For instance, for the first tranched ABS above, the expected loss (EL) is only 0.25%, whereas the EL for the second bond is 0.65%. Assessing the real world return distribution of these securities is difficult.

This brings us nicely to informationally insensitive assets. What people want is something with PD = 0 (and so EL = 0). There isn’t any such thing. What is available are assets with small PDs, and unknown loss distributions. Sovereign recoveries are typically low and uncertain: 30 to 40 per cent of face isn’t a bad guess for an average. AAA ABS, on the other hand, can be structured to have whatever loss distribution the issuer wants. What we learn from this is that it is a serious error to look just at PD or EL when assessing credit quality; you need to get several different views of what the whole return distribution might be like. Moreover, a crisis in the securitized funding markets is caused not just by a reassessment of PDs, but also by a realization that the loss distribution is likely to be more like the third graph above than the second.

Too lazy to be knowns January 30, 2012 at 4:15 pm

You will recall the famous (and unfairly derided) Rumsfeld quote:

[T]here are known knowns; there are things we know we know.

We also know there are known unknowns; that is to say we know there are some things we do not know.

But there are also unknown unknowns – there are things we do not know we don’t know.

Until recently, I thought that this was sensible. But musing about what people knew about AAA RMBS, it strikes me that there is another situation: the things that we are too lazy to know.

Let me explain. In 2006, some people – including some people I knew – knew that AAA RMBS were sensitive to house prices. They knew that prices could fall, and that if they did, the securities would drop in value. However, they had not done the analysis to go beyond that, so they didn’t know how sensitive some of these securities were to a drop in house prices (especially one early in their life, before reserve accounts had been built up). In short, the risk of these assets was not a known unknown, nor an unknown unknown, but more of a too-lazy-to-be-known.

This form of ignorance is widespread and important in finance. People are peripherally aware that there is more to be known about a topic, and that that knowledge is somewhat relevant, but they don’t find out. They prioritize, they ignore stuff that’s difficult; whatever. The point is that it isn’t just Knightian uncertainty and risk that can get you: it is the stuff that some people know, but that you have never looked at properly.

Most things are wrong (and it doesn’t matter) January 13, 2012 at 5:40 pm

For my sins, perhaps in a past life, I used to manage a model verification group. We looked at derivatives pricing models and checked their accuracy. Many of the ones we looked at were somewhat wrong, and some of these we passed anyway. Why?

  • A model is only designed to be used within a domain of applicability. Provided that there are controls in place to make sure it is not used outside that domain, it doesn’t matter that it is wrong there.
  • Moreover, all models are simplifications. They will always break if you stress them enough.
  • Time to market sometimes beats correctness. Being first, even with a slightly wrong model, is sometimes better than being seventh with a more correct one.

In other words, modelling is like crossing a river on lily pads. It isn’t a question of whether things are secure – you know that they are not – it is a question of having sufficiently good judgement that you avoid taking a bath.

It does not surprise me, then, to learn that many research results may be false. People doing complicated things make mistakes, even without bias. Having open data (so that others can build their own model) and open models (so that they can see where yours breaks) helps, but mistakes are still going to slip through. Science, like finance, isn’t ‘correct’; the best it can aim for is ‘not obviously false’, and it might not hit that bar some of the time.

Indeed, ‘correctness’ is a really unhelpful idea in most modelling. Few models are absolutely correct, and certainly very few interesting ones. ‘Close enough, enough of the time’ is much more apposite, and ‘open enough that you can figure that out’ is a good way of helping to get there.

Model risk and your friendly local regulator July 20, 2011 at 1:02 pm

From a fascinating article at AllAboutAlpha (HT FT Alphaville):

[There was a] settlement early this year between the Securities and Exchange Commission (SEC) and the AXA Rosenberg Group LLC (ARG), along with other entities affiliated with ARG…

The specifics of the problem alleged by the SEC turn on the distinction between Barr’s [an affiliate of ARG's] Risk Model proper, and a separate system, called the Optimizer, a program that took data generated by the Risk Model and used it to recommend an optimal portfolio for a particular client based on a benchmark chosen by that client, such as the S&P 500.

SEC charged that after the Risk Model update in 2007, two programmers goofed. They were assigned the task of writing code that would link the new version of that program with the Optimizer. Their coding reported some information to the Optimizer in a decimal form, though other information was expressed as percentages. As a consequence, the Risk Model was working at a less than optimal level from April 2007 onward.

There was no independent quality control of their work. Matters seem to have rolled along in their sub-optimal way until June 2009, when another version of the Risk Model was to be introduced. A new Barr employee “noticed certain unexpected results” when comparing the 2009 model then under preparation to the older 2007 model.

He presented his findings to a Senior Official of Barr later that month and advocated that the error be fixed immediately. But the Senior Official said that it would be fixed when the new model was implemented, that September, and in the meantime told other Barr employees to keep quiet about the discovery, and in particular not to inform ARG’s Global Chief Investment Officer.

It wasn’t until late November 2009 that a Barr employee informed ARG’s Global CEO that there ever had been such an error. Thereafter, that company conducted an internal investigation and disclosed the situation to the SEC examination staff. In April 2010 it took the next step, informing its clients.

None of these entities (ARG, ARIM, and Barr) have admitted or denied any wrongdoing. Together, though, they consented to the entry of an SEC order that assigned joint and several liabilities of $25 million and that separately demanded that they pay $217 million to the clients of ARIM and other advisers affiliated with ARG to redress harm from the coding error.

Does this remind you of the ratings agency CPDO snafu? It is striking that time and time again folks make the assumption that models are somehow internal and proprietary and that if an error is made in one then no one need be told. The SEC’s actions hopefully act as a reminder that a fiduciary duty can extend to not providing clients with misleading numbers, and that if your financials depend on model calculations, then you probably have to tell someone if those are materially wrong. JPM’s $3B of model risk reserves make a lot of sense in this context.
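The underlying mistake is mundane, which is what makes it so instructive: a decimal in one place, a percentage in another. A cheap consistency check at the boundary between two systems catches this class of error; the sketch below is hypothetical (none of the names are AXA Rosenberg’s), but the idea is generic.

```python
def check_volatility_inputs(factor_vols):
    """Sanity-check risk-model outputs before they feed an optimiser: a factor
    volatility of 0.15 (decimal) and one of 15.0 (per cent) cannot both be
    right, so flag anything outside one assumed convention (decimals here)."""
    suspicious = {name: v for name, v in factor_vols.items()
                  if not 0.001 <= v <= 2.0}
    if suspicious:
        raise ValueError(f"inputs look mis-scaled (per cent vs decimal?): {suspicious}")
    return factor_vols

# One upstream component reporting in decimals, another in percentages:
try:
    check_volatility_inputs({"momentum": 0.15, "value": 12.0})
except ValueError as err:
    print(err)
```

It is not independent quality control, but it is the sort of thing that turns a two-year silent error into a loud one on day one.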

Vexed Vix February 24, 2011 at 10:09 am

FT alphaville asks Can you trust the Vix?

The answer rather depends on what you mean by ‘trust’. ‘Can you trust the price of VIX futures to provide the market’s best estimate of what realised S&P volatility will be?’ is a different and more difficult question to ‘Can you trust the price of VIX futures to reflect the price at which people are willing to buy and sell VIX futures?’. The connection between the first and second of these is problematic, as we will see.

Why might the price of a Futures contract on a commodity reflect the expectation of the spot price in the future? Because if not, the naive argument goes, you can buy or sell the spot vs. the Future and profit from that difference. That is all very well if you can indeed do both transactions, and if your arbitrage bound includes the costs of doing that (i.e. storage of the commodity or its borrow cost, interest on borrowed or invested money, and so on). But how do you take a position in volatility?
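Before answering that: for a storable commodity, the bound mentioned above is just cash-and-carry arithmetic, as in the schematic sketch below (proportional costs, invented numbers). The trouble with volatility is that there is no warehouse: nothing in this sketch has a volatility equivalent.

```python
from math import exp

def forward_bounds(spot, rate, storage_cost, borrow_cost, maturity):
    """No-arbitrage band for a forward on a storable asset, with costs expressed
    as continuous proportional rates. Above the upper bound: sell the forward,
    buy and store the spot. Below the lower bound: the reverse, if you can
    actually borrow the asset."""
    upper = spot * exp((rate + storage_cost) * maturity)
    lower = spot * exp((rate - borrow_cost) * maturity)
    return lower, upper

print(forward_bounds(spot=100.0, rate=0.03, storage_cost=0.01, borrow_cost=0.02, maturity=0.5))
```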

The simple answer is to trade an option. If you delta hedge (at least in the Black-Scholes world), your P/L is a function of the difference between the vol you paid, implied, and the vol that is realised. If the VIX suggests vol is going to be higher than you think it is, and if you believe that the VIX will reflect implieds in the future, then sell an option and delta hedge to the VIX expiry.

Unfortunately that doesn’t work very well, as the sensitivity of the option to volatility is a function of spot. It’s like buying a commodity and then finding there is a different amount in the warehouse every day. This isn’t helpful. Moreover the VIX prices off the whole volatility smile, not just the at-the-moneys, so even if you knew what the at-the-money level was going to be at expiry, you would still be exposed to smile. You could just trade the vol (or var) swap, but that’s OTC, so there is a significant amount of infrastructure needed to do the trade.
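To see the ‘different amount in the warehouse’ problem concretely: the volatility exposure you hold when running a delta-hedged fixed-strike option is its vega, and vega moves around with spot. A minimal Black-Scholes calculation (invented parameters) makes the point.

```python
from math import log, sqrt, exp, pi

def bs_vega(spot, strike, rate, vol, maturity):
    """Black-Scholes sensitivity of a vanilla option to a unit change in vol."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * maturity) / (vol * sqrt(maturity))
    return spot * sqrt(maturity) * exp(-0.5 * d1**2) / sqrt(2.0 * pi)

# A one-year option struck at 100: the volatility position you are actually
# running depends on where spot happens to be today.
for spot in (80.0, 100.0, 120.0):
    print(f"spot = {spot:.0f}: vega = {bs_vega(spot, 100.0, 0.02, 0.25, 1.0):.1f}")
```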

The other alternative is to take some risk. Let’s look at the VIX curve. It is, as Alphaville notes, in steep contango. If we were to view this as wrong, then we would want to sell the VIX futures and hope that volatility will be lower at expiry than the future predicts. Who would be the other side in this trade? Well, dealers who are short volatility through selling options can hedge by buying the VIX, and they might be keeping the curve in contango. It is much more common to worry about being short vol than long, as the OTC equity derivatives business tends to be more about selling options than buying them, so the industry position is often structurally short. What all this means is that there is a good reason for the VIX curve to be in contango, and a good reason to just keep rolling a futures position, front or second month vs. one year. If it were me, then, I’d forget the no-arbitrage arguments – the arb is too hard to get on, and there is too much money on the other side for it to come in – and just play the curve.

Woodwork not research October 28, 2010 at 6:06 am

Is all quantitative financial risk management bunk?

No. Next.

OK, I will honour this idiotic question with a more detailed reply than it deserves.

Quantitative financial risk management is not an attempt to understand the fundamental dynamics of the market. It does not pretend to model the world the way that physics does. It simply aims to provide a set of tools which are useful in managing financial risk. Risk management, as she is practiced, is like woodwork: you want a table, we’ll make you a table to fit your budget. If you’re cheap, it might not look pretty. It certainly won’t bear the weight of an elephant. But you will be able to sit around it and eat dinner.

No one believes that Black-Scholes is right, for instance. If it were, there would be a single implied vol for every strike and maturity, and that vol would not vary from day to day. But Black-Scholes is a useful tool for managing options risk. Like a good carpenter, we deal with the issues by shaving a bit off here and bashing it into shape there. The result is, 99.9% of the time, useful. Things go wrong when the results of this process are used outside their domain of applicability. Don’t try to sit a pachyderm on a charity shop* table and you won’t hear splintering sounds.

*Thrift store for my American readers.

Theory danger October 15, 2010 at 1:04 pm

Exhibit one, from Ricardo Caballero:

In this paper I argue that the current core of macroeconomics—by which I mainly mean the so-called dynamic stochastic general equilibrium approach—has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one. This is dangerous for both methodological and policy reasons.

Exhibit two, a suggestion for a margin calculation I heard suggested this week:

A high dimensional generalised Pareto distribution model calibrated at 99.9%

Now honestly, dear reader, I am as much in favour of appropriate modelling as the next geek, but this tendency, despite everything we have been through, to believe that we can model financial systems to a high degree of confidence is profoundly worrying. Caballero has it right: there is a profound confusion of precision in the author’s mind with precision in the real world…
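To see why, you don’t even need the Pareto machinery; the simplest possible experiment will do. Take a heavy-tailed distribution whose true 99.9th percentile you know, estimate that percentile from a year of daily observations, and look at how wildly the estimates swing. (This is an illustration of estimation error at the far tail with invented numbers, not a comment on any particular margin model.)

```python
import numpy as np

rng = np.random.default_rng(5)

# A heavy-tailed 'truth': Student-t with 3 degrees of freedom.
true_q = np.quantile(rng.standard_t(3, 10_000_000), 0.999)

# Now estimate that 99.9% point from one year of daily observations, many times over.
estimates = [np.quantile(rng.standard_t(3, 250), 0.999) for _ in range(1000)]

print(f"true 99.9% quantile ~ {true_q:.1f}")
print(f"estimates from 250 observations: 5th-95th percentile range "
      f"{np.quantile(estimates, 0.05):.1f} to {np.quantile(estimates, 0.95):.1f}")
```

Any margin number quoted to a 99.9% confidence level is sitting on top of exactly this kind of sampling noise, before we even get to model choice.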

How it trades affects how it behaves July 30, 2010 at 6:56 am

Part of a continuing series of facts that seem obvious, but are in fact news: the changing nature of equity market trading (more bots, more ETFs) has affected the dynamics of the market. (See here for a previous item.) Specifically, according to FT alphaville (who cite Barclays research) equity correlation has had a close relationship with the increased ETF volumes. Barclays say:

An important structural shift in the equity markets over the past few decades has been the advent of index funds as an alternative to actively managed mandates… Since the dominance of this kind of index-based component in the equity fund flows should logically lead to an increase in equity correlation, it is tempting to theorize that the secular shift in equity correlation documented above is driven by this effect.

They then do (a little) analysis and conclude

while equity correlation continues to be highly dependent on volatility, the rise in indexation has led to a permanent increase in its “base” level

File under ‘n’ for ‘not a surprise’ and ‘w’ for ‘why did you think you could count on a correlation to tell you very much about the comovement of the market anyway?’

Model risk provisions February 18, 2010 at 9:12 am

Zero hedge picks up on some interesting information at the end of an Economist article:

JPMorgan Chase holds $3 billion of “model-uncertainty reserves”

That number feels reasonable in the context of a bank with nearly a hundred billion of capital. But I’d love to know how they got it past their auditors…

Paradigm hunting December 3, 2009 at 3:12 pm

Like most things which create careers and make money, science isn’t what it claims to be.

It claims to be objective; validated by experiment; unbiased. Of course it isn’t, because that takes far too much time. Usually the cranks are exactly that, so it would be an awful waste to test their claims or otherwise take them seriously. Similarly, the promotions are in the hot topics, the topics that are getting published in the big journals. Stick with those, stick with the orthodoxy, and you have a career. This is entirely rational: paradigm-changing science comes along infrequently, and it is a very good working assumption that any given anomalous result is a screw-up rather than a harbinger of a dramatic new theory. Moreover, scientists are people: they have rivalries, jealousies, and suchlike too.

Scientists, then, for entirely practical and understandable reasons, don’t do science very objectively. And mostly that does not matter. A really good idea will win out eventually, albeit possibly after its creator has died. Some middling good ideas never make it, but the loss is not huge given the increase in efficiency that seeming-crank-avoidance brings. It’s OK, really, most of the time.

(Much of the landscape here has been surveyed by sociologists of science, such as Pierre Bourdieu. Donald MacKenzie has a nice take in the LRB here.)

Unfortunately, as Daniel Henninger points out in the WSJ, when politics enters the picture, things become rather less OK. I don’t agree with much of the Henninger article. However, his basic point – that the failure of scientists at the East Anglia Climatic Research Unit to act the way scientists are supposed to act has caused great damage to the image of science – is sound. And it is a great pity.

First, it is worth saying that most people’s emails, if widely published, would cause some embarrassment. It is no surprise that things are no different for scientists.

Second, as we have seen in several cases recently, politics asks too much from science. Or at least politicians do. The real answer to many, perhaps most scientific questions is we don’t know. Experts give advice based on best guesses. This is particularly the case with climate models: like models in many other areas, they are approximations. We think that they work. There is good evidence that they work in some domains. But we are using them well beyond the area that we are really comfortable with. That means that there is model risk. So yes, climate change might not be as bad as our best guess – and it might be worse. It might happen sooner or later than we think. The balance of risk versus cost strongly suggests doing something now, and something pretty drastic. But we can no more know for sure that this is the right thing to do than we can know for sure that the sun won’t explode tomorrow.

What we need desperately is more evidence based politics. This requires three things:

  • Carefully gathering evidence, and using the best available theory to analyse it;
  • Forming policy on the basis of that analysis;
  • Ongoing review, including changing your mind if the evidence and/or the theory changes.

Politicians find that last part particularly hard as it can involve loss of face. But what would you rather have, someone trying to do the right thing, while acknowledging that they might be wrong about what that thing is, or someone who has blind faith in their decisions whatever the evidence?