
Healing circuits March 29, 2013 at 12:51 pm

I have endured a couple of talks recently on the use of network methods in financial stability analysis. While the general idea is interesting, the specific applications struck me as dubious in the extreme. So it was with some relief that I read something about robustness that was useful and impressive – albeit with no finance connection. Science Daily reports:

a team of engineers at the California Institute of Technology (Caltech)… wanted to give integrated-circuit chips a healing ability akin to that of our own immune system — something capable of detecting and quickly responding to any number of possible assaults in order to keep the larger system working optimally. The power amplifier they devised employs a multitude of robust, on-chip sensors that monitor temperature, current, voltage, and power. The information from those sensors feeds into a custom-made ASIC unit on the same chip… [This] brain analyzes the amplifier’s overall performance and determines if it needs to adjust any of the system’s actuators…

[The ASIC] does not operate based on algorithms that know how to respond to every possible scenario. Instead, it draws conclusions based on the aggregate response of the sensors. “You tell the chip the results you want and let it figure out how to produce those results,” says Steven Bowers… “The challenge is that there are more than 100,000 transistors on each chip. We don’t know all of the different things that might go wrong, and we don’t need to. We have designed the system in a general enough way that it finds the optimum state for all of the actuators in any situation without external intervention.”
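
The mechanism is easy to picture as a generic closed feedback loop. The sketch below is mine, not Caltech's controller: a random-perturbation tuner that only ever sees an aggregate sensor reading, so it needs no knowledge of which transistor failed. All names and numbers are illustrative.

```python
import random

def self_healing_tune(read_sensors, set_actuators, target, n_actuators, steps=2000):
    """Generic closed-loop tuner (a sketch, not Caltech's actual controller):
    no model of individual faults, just 'nudge the actuators at random and keep
    any change that moves the aggregate sensor reading closer to the target'."""
    settings = [0.5] * n_actuators            # arbitrary starting point
    set_actuators(settings)
    best_err = abs(read_sensors() - target)
    for _ in range(steps):
        trial = list(settings)
        i = random.randrange(n_actuators)     # perturb one actuator at random
        trial[i] += random.uniform(-0.05, 0.05)
        set_actuators(trial)
        err = abs(read_sensors() - target)
        if err < best_err:                    # keep the change only if it helps
            settings, best_err = trial, err
        else:
            set_actuators(settings)           # roll back
    return settings

# Toy 'chip': aggregate output depends on the actuator settings, with one
# damaged element that no longer contributes (hypothetical numbers throughout).
state = [0.5] * 8
DAMAGED = 3

def set_act(s):
    state[:] = s

def read():
    return sum(v for i, v in enumerate(state) if i != DAMAGED)

print(self_healing_tune(read, set_act, target=5.0, n_actuators=8))
```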

Owning things you don’t understand August 15, 2012 at 3:00 pm

Dealbreaker has an interesting article on a case involving Wells Fargo mis-selling SIV-issued CP to a muni. They go through the specifics of the case (which are outrageous – the salesguy didn’t know what a SIV was when he sold them the paper), but then they riff on the structure of the system that let this happen.

If you think it’s a bad thing that various municipalities got singed when a bunch of overlevered investments in subprime securities blew up – and you do, right? what are you, a monster? – then where do you place the blame? The municipalities? I mean, sure, absolutely, they were dopes, but their job was to be dopes – they thought they could rely on layers and layers of paid advisors who seemed to owe them something. The people who built the SIVs? Yes, they were clearly arbitraging the inattention of various other parties in order to maximize profits, so, bad on them, except: they were traders, and arbitraging others’ inattention (within antifraud rules, etc.) was their job. The rating agencies? Absolutely: they were dopes too, but their job was not to be dopes, so bad work – but bad work protected by the First Amendment.

Brokers who put unsophisticated customers into these trades are a good target: unlike structurers and raters who can hide behind legal disclosure, the brokers’ job was actually to find suitable investments, so it’s fair enough for them to get in trouble when they didn’t even try to do that. So good work by the SEC for fining them – and for fining them an amount that, while pretty small, is still 100x what they made on selling this paper.

I have to agree with this. There’s nothing clearly evil about structuring SIVs. The ratings agencies could have done a better job, but given that there seems to be no legal sanction available to use against them, I guess we will have to let them lie. The people who are really culpable are the ones the muni paid to do their due diligence for them, and who should have been acting in their interests. But, as so often with professional advisors, there doesn’t seem to be enough at stake; sure, a fine is nasty, but shouldn’t the sanction for advising someone to buy something that you don’t understand and that isn’t suitable for them be higher than writing a cheque?

The pizza system May 20, 2012 at 3:07 pm

One of the inspirations for this blog was Italian coffee. I mean that both literally – quite a lot of my posts are written with the aid of an espresso – and more theoretically; coffee in Italy is wonderful, cheap, and available very widely. I was curious how they managed it, and that led to the very first post on this blog, over six years ago.

Today an article in the Guardian made me think about another incredibly efficient system in Italy, that of pizza. Again we have something that is cheap, readily available, and world-beatingly good*. Indeed, some of the finest pizza is also the cheapest. A meal and drink at the multi-award-winning Da Michele in Naples, for instance, will cost you less than ten euros. If you are anything like me, you will also think that this is the best pizza you have ever tasted.

The Guardian now says that there is a trend for gourmet pizza with exotic ingredients; figs, for instance, or truffle oil. That’s fine. People will always try to find a way to charge more for something that is less good. It worked pretty well for Starbucks, after all, so it might work for some of these ‘ultra-pizza’ chefs. But I strongly doubt that it will influence true Italian pizza, just as I doubt Starbucks will ever make much money in Italy. The original is just too good for a challenger to be that successful.

This robustness is a characteristic of good socio-economic systems. So too is the property that everyone profits, but no one makes too much. There isn’t much growth – a small town only needs one decent pizza joint – but there is solid demand for good quality. Tasty tomatoes and buffalo mozzarella go in, pizza comes out, and everyone – including the tomato grower and the cheese maker – is happy. Moreover, because everyone knows what good pizza is like, there is little demand for bad pizza. Thus there is a positive feedback loop which keeps average quality high, at least in much of Southern Italy.

Another good thing about the pizza system is that it is democratic. You can’t get a better pizza at Da Michele by paying more; you can’t jump the queue by slipping the Maitre d’ a hundred, not least because there isn’t a Maitre d’. The system serves a broad spectrum of interests, not a narrow one. Long may it continue.

*One of the most absurd claims I ever heard was that there was better pizza made in Brooklyn than in Naples. This is akin to claiming that the AMC Pacer is a better looking car than the Ferrari F12 Berlinetta.

Pay for performance from politicians (shock) April 28, 2012 at 11:58 am

An interesting initiative this, from the US. The HuffPo explains:

Both Houses of Congress are currently considering a bill which, in my humble estimation, would be wildly popular with the public — if they knew about it, that is. This is a truly non-partisan issue, one that pits every taxpayer in the country against the 535 members of Congress themselves — regardless of their party affiliation. The idea is a simple one, as evidenced by the bill’s official title: the “No Budget, No Pay Act.”

That’s it in a nutshell. The title is so good, it barely needs explaining. If Congress doesn’t pass a completed budget on time — both the budget blueprint and the 12 appropriations bills necessary — then when the new federal fiscal year dawns on the first of October, they stop getting paid. Their paychecks halt until the budget is complete, and they are not allowed to (later on, under the cover of night) award themselves retroactive pay for this period.

What is interesting about this – apart from the idea itself, which rocks – is that it pits the interests of all politicians (or very nearly all, at least) against those of the people. As such, it is very hard to get measures like these enacted. When you have cartel parties, though, such measures are necessary.

Update. It turns out that this comes from a rather interesting group, No Labels. Check them out.

Local vs global optimization for corporations April 5, 2012 at 6:34 am

Doug had a very interesting comment to my post about evolutionary diversity and banking. I’ll set up the problem, then quote some of his comment and try to give my spin on his questions.

Essentially we are concerned with an unknown fitness landscape where we are trying to find the peaks (best adapted organisms or most profitable financial institutions) based on changes in makeup (genetics or business model). The landscape might have one peak, or two, or twenty-seven. The peaks might be of similar height, or they might be wildly different; the local maxima may or may not be close to the global maximum. Moreover, you only have knowledge of local conditions. The question is how you optimize your position in this landscape.

This is related to the topic of metaheuristics… A typical scenario would be doing a non-linear regression and finding the model parameters that maximize fit on a dataset. In this scenario there’s no analytical solution (as there is, e.g., in linear regression), so the only thing you can do is successively try points [to see how high you are] until you exhaust your resource/computational/time limits. Then you hope that you’ve converged somewhere close to the global maximum or at least a really good local maximum.

The central issue in metaheuristics is the “exploitation vs exploration” tradeoff. I.e. do you spend your resources looking at points in the neighborhood of your current maximum (climbing the hill you’re on)? Or do you allocate more resources to checking points far away from anything you’ve tested so far (looking for new hills)?

One of the most reliable approaches is simulated annealing. You start off tilting the algorithm very far towards the exploration side, casting a wide net. Then as time goes on you favor exploitation more and more, tightening on the best candidates.

Simulated annealing is good for many of these kinds of problems; there are also lots of other approaches/modifications. A couple of things to note though: there is no ‘best’ algorithm (ones that are good on some landscapes tend to fail really badly on others, while those that do OK on everything are always highly suboptimal compared with the best approach for that terrain); moreover, this class of problems for arbitrary fitness landscapes is known to be really hard.
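
For concreteness, here is a minimal, generic simulated annealing loop (a textbook sketch, not anything Doug described in detail): early on, the high ‘temperature’ accepts almost any move, which is the wide ranging exploration; as it cools, only improvements tend to survive, which is the local exploration.

```python
import math
import random

def simulated_annealing(fitness, neighbour, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Maximise `fitness` over a landscape we can only probe point by point.
    `neighbour(x)` proposes a nearby candidate; `t0` and `cooling` set how
    quickly we shift from exploration to exploitation."""
    x, fx = x0, fitness(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        fy = fitness(y)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature falls.
        if fy >= fx or random.random() < math.exp((fy - fx) / t):
            x, fx = y, fy
            if fx > fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy usage: a one-dimensional landscape with several local peaks.
if __name__ == "__main__":
    f = lambda x: math.sin(3 * x) + 0.5 * math.sin(7 * x) - 0.01 * x * x
    step = lambda x: x + random.uniform(-0.5, 0.5)
    print(simulated_annealing(f, step, x0=0.0))
```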

In what follows, I’ve taken the liberty of replacing the terms ‘exploration’ and ‘exploitation’ with ‘wide ranging exploration’ and ‘local exploration’, as I don’t think ‘exploitation’ really captures the flavour of what we mean. Back to Doug:

I believe the boom/bust cycle of capitalism operates very much like simulated annealing. Boom periods, when capital and risk are loose, tend to heavily favor wide ranging exploration (i.e. innovation). It’s easy to start radically new businesses. Bust periods tend to favor local exploration (i.e. incremental improvements to existing business models). Businesses are consolidated and shut down. Out of those new firms and business strategies from the boom periods, the ones that proved successful go on to survive and are integrated into the economic landscape (Google), whereas those that weren’t able to establish enough of a foothold during the boom period get swept away (Pets.com).

All of this is tangentially related, but it brings up an interesting question. Most of the rest of the economy (technology in particular) seems to be widely explorative during boom times. Banking in contrast seems to be locally explorative even during boom times, i.e. banking business models seem to converge on each other. Busts seem to fragment banking models and promote wider exploration.

So why is banking so different that the relationship seems to get turned on its head?

The cost of a local vs. global move is part of it. Local moves are expensive for non-financials, almost as expensive as (although not as risky as) global moves. That makes large moves attractive in times when the credit to finance them is cheap. When credit is expensive and/or rationed, incremental optimization is the only thing a non-financial can afford to do.

It’s different for many financials however. The cost of climbing the hill – hiring CDO of ABS traders – is relatively small compared to the rewards. Moreover there is more transparency about what the other guys are doing. Low barriers to entry and good information flow make local maximisation attractive. To use the simulated annealing analogy, banking is too cold; there isn’t enough energy around to create lots of diversity.

And is this a bad thing for the broader economy, and if so why?

I think that it is, partly because the fitness landscape can change fast and leave a poorly adapted population of banks. Also, there are economies of scale for financial institutions and high barriers to entry for deposit takers and insurers (if not hedge funds), so there are simply not enough financial institutions of material size. It is as if all your computation budget in simulated annealing is being spent exploring the neighbourhood of two or three spots in the fitness space.

A large part of the answer, it seems to me, is to make it easier to set up a small bank and much harder to become (or remain) a very large one.

Holding back evolution March 30, 2012 at 7:26 am

Hans asked a good question in comments to a prior post:

when you start off with 40 medium size banks, eventually a few will have a better business model than the others. And then the business model gets copied (due to shareholders seeing the return at the more successful banks and wanting the same), which leads to a convergence to 40 banks with (more or less) the same business model. Basically what we saw in the run-up to the financial crisis. After which the take-overs can begin due to economies of scale.

In other words: I agree that ‘evolution’ thrives on diversity, but how do you prevent convergence to one (or 2/3) business models?

I have to say that that one has me stumped for now. The fitness landscape changes fast for banks, so rapid change (what an evolutionary biologist would call saltation) is the norm. If we let evolutionary pressure bear on a diverse set of creatures in a fitness landscape with a single peak – a single business model – the ones that don’t climb the peak aren’t very successful. So do we have to imagine legislators coming in like comets every 50 years and imposing diversity again? That’s pretty depressing.

The problem is the premise: a single-peaked fitness landscape. Diversity is encouraged when there are lots of local maxima in the fitness landscape. We need, in other words, to make sure that lots of different banking models are acceptably profitable. There are two ways to do this, of course: lifting up the little guys (aka the wildlife sanctuary approach) or crushing the big guys (aka a cull). To your elephant guns, gentlemen.

Being around long enough to be wrong March 7, 2012 at 3:30 pm

I had coffee with someone today who – very politely – asked an interesting question. She said I had changed my position on some things over the years and wondered how that had come about. I said something about the credit crunch having shown the flaws in a lot of positions, my own included, then mentioned risk-sensitive capital rules as something I was particularly embarrassed to have wholeheartedly supported.

That was true so far as it went, but thinking about it as I walked back, the full story is a little more complex. I think many people who worked in investment banks in the 90s drank the kool aid (to use an American cliche) to some degree. We didn’t see the credit crunch coming; we believed that more liquidity was always better, and that the more efficient markets were, the better they would allocate capital. What consenting investment banks and their clients did in private, we believed, was their own business provided that it was legal. In short, we lost sight of systemic risk and we had a very simplistic idea of the connection between finance and the real economy.

I still don’t believe that most of the folks who worked in investment banks in the 1990s were particularly evil, nor particularly culpable for the crunch. One can point at a few key figures – Greenspan, Mozilo, Fuld and so on – but that’s just an entertaining sideshow. The real story is about the failure of institutional arrangements. The nature of US mortgages and the US mortgage distribution system; lax regulation of broker/dealers and monolines (and to a lesser extent banks); an abject failure by most of the industry and its supervisors to understand how dangerous liquidity risk is; abandonment of caveat emptor by everyone from RMBS buyers to mortgage borrowers: these were some of the more important causes of the crisis. That’s why the rules of the game are so important. Bad people can’t do too much damage in a system with good rules. But even good people end up contributing to something very bad if controls are lax and incentives are wrong.

Would have vs. did February 15, 2012 at 9:59 am

The Libor probe is heating up. Bloomberg reports:

Global regulators have exposed flaws in banks’ internal controls that may have allowed traders to manipulate interest rates around the world, two people with knowledge of the probe said.

Investigators also have received e-mail evidence of potential collusion between firms setting the London interbank offered rate, said the people, who declined to be identified because they weren’t authorized to speak publicly.

Now I have no special knowledge of this situation, and I have no idea whether banks did indeed manipulate Libor. But I do think that the design of Libor is inherently flawed. When submitting the quotes that are averaged to produce Libor, banks are asked:

“At what rate could you borrow funds, were you to do so by asking for and then accepting inter-bank offers in a reasonable market size just prior to 11 am?”

Note the hypothetical: ‘could you… were you to…’. In other words, Libor isn’t necessarily the average of rates at which banks did borrow, but rather of rates at which they estimated they could probably borrow. That’s a big difference, and it makes it much harder for a bank to be sure that the numbers it submits to the Libor panel are right. [The issue is that banks don’t borrow unsecured at longer tenors very much at all, so something like ten-month Libor – or even one-year Libor – may well be the average of guesses rather than of actual market rates.]
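
For context, the published fix was not a simple average of all submissions: the quotes were ranked, the top and bottom quartiles discarded, and the rest averaged. A rough sketch of those mechanics, with entirely made-up submissions:

```python
def libor_fix(submissions):
    """Roughly how panel submissions became the published rate: rank the
    quotes, drop the top and bottom quartiles, average the rest."""
    quotes = sorted(submissions)
    k = len(quotes) // 4                     # size of each discarded quartile
    trimmed = quotes[k:len(quotes) - k]
    return sum(trimmed) / len(trimmed)

# Hypothetical 3-month submissions (in %) from a 16-bank panel: each number
# is an estimate of where the bank thinks it could borrow, not a trade.
panel = [0.46, 0.47, 0.47, 0.48, 0.48, 0.49, 0.49, 0.50,
         0.50, 0.51, 0.51, 0.52, 0.53, 0.54, 0.56, 0.60]
print(round(libor_fix(panel), 4))
```

The trimming blunts, but does not remove, the effect of a shaded quote: a submission that moves into or out of the interquartile range still moves the average.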

Obviously there is a tension here between having a lot of rates, some of which are less well defined, and having only rates that are genuinely based on market prices. Personally, though, I would have thought that building a multi-hundred-trillion dollar industry on prices that may be the average of guesses might create some issues, such as the risk of manipulation…

Transparency and model gaming February 5, 2012 at 12:14 pm

A site with a rather tacky name suggests:

One of the most common reasons I hear for not letting a model be more transparent is that, if they did that, then people would game the model. I’d like to argue that that’s exactly what they should do, and it’s not a valid argument against transparency.

Take as an example the Value-added model for teachers. I don’t think there’s any excuse for this model to be opaque: it is widely used (all of New York City public middle and high schools for example), the scores are important to teachers, especially when they are up for tenure, and the community responds to the corresponding scores for the schools by taking their kids out or putting their kids into those schools. There’s lots at stake.

Why would you not want this to be transparent? Don’t we usually like to know how to evaluate our performance on the job? I’d like to know if being 4 minutes late to work was a big deal, or if I need to stay late on Tuesdays in order to be perceived as working hard. In other words, given that it’s high stakes, it’s only fair to let people know how they are being measured and, thus, how to “improve” with respect to that measurement.

Instead of calling it “gaming the model”, we should see it as improving our scores, which, if it’s a good model, should mean being better teachers (or whatever you’re testing).

This is an interesting point. I certainly agree that if you are going to measure people on x then telling them what x is is only fair. But I would never promise that x was my only criterion for measuring a real-world job, as I don’t believe we can write the specification for many activities well enough to always know that maximizing the x-score is equivalent to doing the job well.

(PFI contracts are of course a great example of this; one of the reasons that PFI is a terrible idea is that you can’t write a contract that defines what it means to run a railway well for ten years that stands up to the harsh light of events.)

Thus I would argue that the problem in the situation outlined above isn’t lack of transparency, it is using a fixed formula to evaluate something complicated and contingent. Sure, by all means say ‘these scores are important’, but leave some room for judgement and user feedback too. Humility about how much you can measure is important too.

There is also a good reason for keeping some models secret, and that is the use of proxies. Say I want to measure something but I can’t access the real data. I know that the proxy I use isn’t completely accurate – it does not have complete predictive power – but it is better than nothing. Here, for instance, is the Fed in testimony to Congress on a feature of credit scoring models:

Results obtained with the model estimated especially for this study suggest that the credit characteristics included in credit history scoring models do not serve as substitutes, or proxies, for race, ethnicity, or sex. The analysis does suggest, however, that certain credit characteristics serve, in part, as limited proxies for age. A result of this limited proxying is that the credit scores for older individuals are slightly lower, and those of younger individuals somewhat higher, than would be the case had these credit characteristics not partially proxied for age. Analysis shows that mitigating this effect by dropping these credit characteristics from the model would come at a cost, as these credit characteristics have strong predictive power over and above their role as age proxies.

Credit scoring models are trying to get at ability and willingness to pay, but they have to use proxies, such as disposable income and prior history, to do that. Some of those proxies inadvertently measure things that you don’t want them to – age, for example – but excluding them would decrease model performance.

Here, it is better that the proxies are not precisely known so that they are harder to game. The last thing you want in a credit scoring model is folks knowing how best to lie to you, especially if some of the data is hard to check. It is much better to ask for more than you need, as in psychometrics, and use the extra data as a consistency check (or just throw it away) than to tell people how your model works. Its predictive power may well decline markedly if people know how it works.
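
To make the ‘ask for more than you need’ point concrete, here is a toy sketch (entirely illustrative, nothing like a real scoring model): gather two pieces of information that should roughly agree, and flag applications where they don’t, rather than trusting either answer on its own.

```python
def consistency_flag(stated_income, monthly_housing_cost, max_ratio=0.6):
    """Toy cross-check: housing costs wildly out of line with stated income
    suggest that at least one of the answers is unreliable. The threshold is
    arbitrary and purely illustrative."""
    if stated_income <= 0:
        return True
    return (12 * monthly_housing_cost) / stated_income > max_ratio

# Hypothetical applicants: the second pair of answers is mutually inconsistent.
applicants = [
    {"income": 60_000, "housing": 1_500},
    {"income": 40_000, "housing": 3_500},
]
for a in applicants:
    print(a, "flag" if consistency_flag(a["income"], a["housing"]) else "ok")
```

The point of keeping the check (and the threshold) private is exactly the one above: publish it, and the two answers will be made to agree.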

Of course, you need a regulatory framework around this so that models which try to measure, for instance, race, are banned, but that does not require model transparency. Sometimes it really is better to keep the model as a black box.

Quote of the day January 2, 2012 at 6:54 pm

Happy New Year to you all.

Let’s start with a nice statement of the difficulty of writing good rules from the Streetwise Professor:

The hard thing is to design regulations that balance between micro incentives (to control opportunism) and macrostability.

Let’s hope that balance is achieved a little better this year than it was last.

Now go and read the full article: it is worth it.

Planes not bridges December 10, 2011 at 7:30 am

If I had to pick an unconventional member for the financial stability board, I would seriously consider an aircraft safety expert. Let me explain.

Civil engineers know about one kind of safety: the safety of bridges and such like. The crucial thing about a bridge for our purposes is that its elements don’t change their nature when you change the design. They might change their role – whether they are in compression or tension, say – but their physical properties are constant.

Two planes that kept us safe

Aircraft safety adds an element that civil engineers don’t have to worry about (much) – people. People react to the situation they find themselves in. They learn. Importantly, they form theories about how the world works and act upon them. Thus aircraft accidents are often as much about aircrew misunderstanding what the plane is telling them as about mechanical failure. The system being studied reacts to ‘safety’ enhancements because the system includes people, and hence those enhancements may introduce new, hard to spot error modes.

The report into the Air France 447 crash is an interesting example of this. See the (terrifying) account in Popular Mechanics here. As they say in their introduction to the AF447 Black Box recordings:

AF447 passed into clouds associated with a large system of thunderstorms, its speed sensors became iced over, and the autopilot disengaged. In the ensuing confusion, the pilots lost control of the airplane because they reacted incorrectly to the loss of instrumentation and then seemed unable to comprehend the nature of the problems they had caused. Neither weather nor malfunction doomed AF447, nor a complex chain of error, but a simple but persistent mistake on the part of one of the pilots.

AF447 was, by the way, an Airbus A330, a plane packed to the ailerons with sophisticated safety systems. Not only did they not work; the plane crashed partly because of the way the pilots reacted to their presence.

Aircraft risk experts understand this kind of reflexive failure whereby what went wrong wasn’t the plane or the pilot but rather a damaging series of behaviors caused by the pilot’s incomplete understanding of what the plane was and wasn’t doing. This is often exactly the type of behaviour that leads to financial disasters. Think for instance of Corzine’s incomplete understanding of the risk of the MF Global repo position.

Another thing aircraft safety can teach us is the importance of an open, honest post mortem. Despite the embarrassment caused, black box recordings are widely available, at least for civil air disasters. (The military is less forthcoming, although things often leak out eventually – see for instance here for a fascinating account of the Vincennes disaster.) In contrast, we still don’t have the FSA’s report on RBS, let alone a good account of what happened at, to pick a distressed bank more or less at random, Dexia. UBS is a beacon of clarity in an otherwise murky world.

It is hard to learn from mistakes if you don’t know many of the bad things that happened and what the people who did them believed at the time. Finance, like air safety, is epistemic: to understand it, you have to know something about what people believe to be true, as that will give some insight into how they will behave in a crisis.

The more I think about this, the more I think risk managers in other disciplines have a lot to teach us financial risk folks.

How wrong am I about HFT? July 2, 2011 at 1:30 pm

Doug very kindly made a detailed comment to my post about optimal levels of friction in markets which I have been meaning to reply to for a while.

Broadly my take on HFT is that it produces poor quality liquidity – there when you don’t need it and gone when you do – and that if the predominance of trading is bot vs. bot at high frequency, then the dynamics of the market can change in bad ways, witness the flash crash. Doug makes me think twice, though, about some of this, so let’s look at some of what he has to say.

Well if you’re looking for academic literature that tries to identify the best point between very frictional markets and HFT, you might first want to find academic literature that confirms your belief that HFT is bad to begin with. On this front most academic studies tend to find 1) HFT broadly reduces trading costs, 2) HFT increases market liquidity, 3) HFT reduces intraday volatility by filtering trading noise.

It is (3) that I find surprising. It’s relatively straightforward to find definitions of trading costs and liquidity such that (1) and (2) work. I would argue that HFT has reduced trade sizes and decreased liquidity/increased costs for block trades, especially combined with the move to trading the VWAP rather than brokers taking on blocks as a risk trade, but (3) really gives me pause for thought. Is it true?

Well, it depends. If you look at short term (a few seconds) big swings, then HFT has clearly made things worse. See here.

Moreover, HFT activity is correlated with volatility: see here.

Finally HFT seems associated with an increase in autocorrelation of stock returns: see here.

Even without this, there are reasonable concerns that (in the words of the Bank of Canada Financial Stability Review):

Some HFT participants overload exchanges with trade messaging activity; use their technological advantage to position themselves in front of incoming order flow, making it more difficult for participants to transact at posted prices; or withdraw activity during periods of pricing turbulence.

Then of course, turning back to Doug, …

… there’s the Flash Crash. It’s hard to determine though what the total amount of economic damage from it was. Arguably the August 2007 quant meltdown disrupted more economic activity by causing less displacement from fair value but over a more prolonged time period. So it’s hard to tell if the more frictionless HFT is more disruptive than older inter-day stat arb (which to a large degree it’s supplanted).

Overall HFT firms hold very small portfolios relative to their volume, because of very high turnover. E.g. a typical first-tier HFT group might run 5-10% of US equity volume while having maximum intraday gross notional exposure of $1 bn or less (with much less overnight). Even if 5 HFT firms with perfectly correlated positions simultaneously liquidated their portfolios, it would generate less order flow than the unwinding of a mid-sized hedge fund. The fat finger order that catalyzed the flash crash (75k ES contracts) was simply too large a position to be held by any HFT desk.

Which isn’t to say that the changes wrought by HFT and electronic trading didn’t have anything to do with it. Once the market gets used to a certain level of liquidity it becomes very painful to take it away. Traders will continue to try to trade at the same order sizes while market makers provide much less liquidity. The same order flow magnitude will lead to outsized market impact and extreme swings. This is clearly what happened in the flash crash, when quoted size per level on many names went from quantities of 10,000 to 50 or less. Clearly this is less of a problem in an old-school dealer or specialist market.

if the market makers stop or hold off on quoting people on the phone for 90 seconds to figure out what if anything is wrong with P&G it doesn’t lead to market panic. But if electronic market makers pull their quotes for 90 seconds, a lot of people are going to keep trading through those thin quotes and you’re going to get insane $0.50 trades hitting the tape. Of course everyone responds to that and panics, potentially triggering margin calls, etc.

Exactly. My recipe for HFT is not to ban it, but to reduce the impact of its speed by (1) requiring all market participants to trade on a central order book – no dark pools – and (2) requiring all quotes to be good for a minimum of half a second. This would affect real trading activity very little while completely wrecking the high frequency strategies that generate flash crash risk.
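
To illustrate the second part of that recipe, here is a toy sketch of a minimum quote life rule (my own illustration, not any exchange’s matching-engine logic): a quote simply cannot be cancelled until it has rested for half a second, so posted liquidity stays executable for at least that long.

```python
import time

MIN_QUOTE_LIFE = 0.5  # seconds -- the proposed minimum resting time

class QuoteBook:
    """Toy order book enforcing a minimum quote life. Purely illustrative."""

    def __init__(self):
        self.quotes = {}  # quote_id -> (price, size, time posted)

    def post(self, quote_id, price, size):
        self.quotes[quote_id] = (price, size, time.monotonic())

    def cancel(self, quote_id):
        price, size, posted = self.quotes[quote_id]
        if time.monotonic() - posted < MIN_QUOTE_LIFE:
            # Too soon: the quote stays live and executable.
            return False
        del self.quotes[quote_id]
        return True

book = QuoteBook()
book.post("q1", price=61.50, size=100)
print(book.cancel("q1"))        # False: cancelled too quickly
time.sleep(MIN_QUOTE_LIFE)
print(book.cancel("q1"))        # True: the quote has rested long enough
```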

Doug makes an interesting claim though:

However, given that the new paradigm of electronic trading has only failed to deliver the liquidity expected of it by the broad market for 20 minutes in the six years since Reg NMS, I’d say that’s a fairly good record. That still means that 99.995% of the time investors reaped much lower transaction costs. And basically unless you yourself were either A) an intraday trader, or B) a levered investor whose broker used intraday positions to calculate margin, you were unaffected. The buy-and-hold retail investor doesn’t care if the P&G he owns temporarily goes to 0 for 3 minutes.

Well, it was more like 30 minutes I think, but anyway the claim deserves analysis. If HFT really lowered bid/ask spreads, then perhaps a once in six year flash crash is an acceptable price to pay for that. Scary though the flash crash was, Doug is right, it did very little direct economic damage. One might argue that it did quite a bit of indirect damage in reducing confidence in markets, but that is a pretty woolly claim. No, Doug’s point is a reasonable one and it deserves further analysis.

I’d say the stronger argument against frictionless markets in general and HFT specifically is that its social benefit is small relative to its private profits. If you consider the actual purpose of the financial markets, to allocate capital efficiently, a good metric is the total amount of dollars you add to your portfolio’s return. HFT firms earn high profits relative to this because their low risk and high returns allow them to need essentially no outside capital. Keeping 100% of the trading PnL beats the standard hedge fund 2/20 or the long-only <1% management fee.

...The real cost of decreasing market friction is that it makes these high return, low risk, low capacity strategies feasible and directs investing resources and talent away from more constructive pursuits.

Hmmm, yes, I agree, this is a pretty abstract (if enormously profitable) game that many of our best and brightest are involved in. It does seem bizarre that we tolerate a market infrastructure that allows HFT players to extract such high profits so reliably, while paying rather little back and while tying up so many clever people on something essentially useless. If HFT profits were taxed at, say, 75%, then I would feel a lot better about it. But they aren’t, and probably it is politically much easier to change markets so that HFT profits are lower than to impose sufficient taxes and/or capital requirements to bring them into line. (There’s an idea: a capital requirement proportional not to your risk position but to the number of trades you do… I like it…)

In any event, though, it is clearly worth asking the question ‘what are the costs and benefits of HFT?’. It’s complicated, with a significant amount of evidence on both sides, but consideration of the sheer profitability of HFT must weigh large in any answer.

Incentives for equality October 19, 2010 at 6:06 am

A typically well observed post on Jonathan Hopkin’s blog reminds me of something I have been meaning to blog about for a while – income inequality. The evidence here is, if not absolutely unambiguous, then pretty persuasive. Higher income inequality is bad for society. So what’s to be done?

I know this is simplistic, but for me it is worth starting with the observation that the current version of capitalism tends towards inequality. In other words, if you are rich, getting richer is comparatively easy. If you are poor, getting richer is a lot harder. And in the middle, well, the middle.

(The reason this post is so delayed is that I had planned to gather some evidence for this assertion but, well, you know, it is only a blog. It feels true. Let’s run with it, anyway.)

It seems to me that this is the best justification for a progressive tax system that actually works. Not only should the rich pay more, they should pay proportionally more to fix the market failure whereby it is easier for them to make money. Of course the prospect of a government that is not in thrall to wealth is remote, but one can but hope.

Update. From that quintessential class act, Felix Rohatyn, talking about the US (but he might as well be talking about the UK):

“No matter what anybody says, there is a maldistribution of wealth in this country that I think is very unhealthy,” he said, leaning back in his chair. “It’s very easy to fall into the mode of saying, well this whole thing is casinos and paper money.” But he added, “I don’t think everyone in the financial community is a rogue — it’s just that that’s the way the world is.”

Quote of the day September 21, 2010 at 6:06 am

“We need to … recognise, that in finance and economics, ill-designed policy is a more powerful force for harm than individual greed or error.”

Adair Turner, Chairman of the FSA.

What is, and isn’t possible with capital rules August 29, 2010 at 6:23 pm

Yves Smith at Naked Capitalism was kind enough to refer to some remarks I and the Streetwise Professor had made about Basel III. That got me thinking about what you can hope to achieve with any set of rules for internationally active banks.

First, some general points:

  • The Basel II rules are far too complex. I think I finally lost it when I got to the section on early amortisation provisions for ABCP conduits. (Hang on, I said to myself, they know about conduits – a reg cap arb device – and instead of banning them, they write rules to distinguish slightly better from slightly worse ones? You what?) Therefore I’d set an arbitrary limit, 100 pages say, and require the rules to be no longer than this, ever.
  • Basel is meant to be for internationally active banks. Not for hedge funds, not for investment managers, not for corporate finance advisers. The sooner the European Commission picks up on this and implements Basel not, as currently, for tens of thousands of firms, but instead for the twenty or thirty financial institutions with more than $100B of assets in the EU, the better. Everyone else is much less likely to be systemically important, and anyway has a different business. Write rules that suit these different types of firms: don’t impose rules designed for BofA/Deutsche/HSBC on everyone in the name of the level playing field.
  • Stop fighting the last war. The number of rules or proposed rules that address what happened to AIG is absurd, for instance. You might as well ban all capital markets participants whose names begin with A and have done with it.

Given this, what can we support in Basel III?

  • Capital = net tangible equity. The redefinition of regulatory capital to be based on core tier 1, or something similar, makes sense.
  • Gone Concern Capital. A mechanism which would allow banks to be recapitalised by converting sub debt into equity would also clearly enhance financial stability.
  • Leverage. A backstop leverage ratio provides a useful mechanism to ensure that if risk based capital rules are wrong, the resulting distortions cannot become too large. Personally I would make the maximum permitted leverage inversely proportional to asset size, so the bigger you get, the less levered you are allowed to be.

These parts of Basel III, then, are reasonable. But beyond that, what can we do with capital, and what can’t we?

  • Capital requirements can make the financial system safer. There is no doubt that better capitalised firms are safer, all things being equal, than less well capitalised ones.
  • However, capital is not the only tool. Indeed, there is a sense in which it is the last tool, in that if you need it, it’s a bit late. Good risk management and plentiful liquidity are vital too – and it is typically a lack of these, rather than a lack of capital, that causes firms to fail.
  • Broadly, risk sensitive capital requirements are better. This is so obvious that it hardly needs explaining. However, once you define a particular notion of risk, there are issues. It may be feared for instance that firms might take risk in ways that are not captured by the notion chosen, or that they might concentrate on that notion at the expense of others. The answer here is not to keep on making capital requirements more complicated until the arbitrages become hard to find, but rather for supervisors to actually understand firms’ risk taking, and to fix any obvious flaws in risk management or in capital via pillar 2.
  • Thus, I wouldn’t write thousands of rules. Instead I would say something like ‘Firms must have sufficient capital to cover the losses which might be expected in a one in a hundred year financial crisis. They must be able to demonstrate that this is so, and the full details of this demonstration must be published on a quarterly basis.’ This, in Krugman’s terminology, would be a greek rather than a roman rule. (A toy sketch of what such a demonstration might look like follows this list.)
  • All of this would mean that there are distortions. Bank A would have more capital for a given activity than Bank B. But that happens already – in part due to the use of internal models. But at least, rather than getting false comfort from hundreds of pages of rules, supervisors, investors, counterparties and analysts would know that they had to analyse banks’ risk disclosures and capital calculations carefully.
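
Here is that toy sketch of the ‘greek rule’ demonstration (the loss model and every number are made up): simulate many possible years under the firm’s own loss model and check that capital covers at least the one-in-a-hundred-year loss. The hard part, of course, is the loss model itself, which is exactly what supervisors would have to scrutinise.

```python
import random

def required_capital(simulate_annual_loss, n_years=100_000, quantile=0.99):
    """Simulate many possible years and return the loss level exceeded only
    once in a hundred years on average: the capital the firm must demonstrate
    it holds. The credibility of `simulate_annual_loss` is the whole game."""
    losses = sorted(simulate_annual_loss() for _ in range(n_years))
    return losses[int(quantile * n_years)]

# Toy loss model: mostly benign years, with occasional crisis-sized losses.
def toy_loss():
    base = max(0.0, random.gauss(1.0, 2.0))                     # ordinary years
    crisis = random.expovariate(1 / 40.0) if random.random() < 0.02 else 0.0
    return base + crisis                                        # arbitrary units

print(required_capital(toy_loss))
```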

Crowded trades in capital arb August 20, 2010 at 6:06 am

The Streetwise Professor points out something that I had not realised about regulatory capital arbitrage: not only do regulatory capital arbitrage opportunities blunt the impact of regulation, but they also produce crowded trades. By definition all the banks who engage in these trades are one way round, while all their counterparties are the other. This kind of situation often ends badly, so it does encourage me to renew my call for supervisors to set up Regulatory Capital Arbitrage groups to look at these opportunities. The only difficulty would be to stop the good ones turning themselves into hedge funds…

The great regulatory capital game – an experiment in crowd sourcing policy July 19, 2010 at 6:06 am

Here’s something I would like to do – it is far too much work for me (or, I suspect, for anything less than a team of 30) to actually do, but never mind that, let’s just run with it.

First, build a mini model of the banking system as a set of autonomous agents. You’ll need a variety of banks and brokers, securities markets and lending, central banks and monetary policy, treasury activities and trading, investment managers and hedge funds. The simulation need not be hugely complex: a few different securities will probably do, for instance, but prices should be set by real market activity, and there should be analogues of government bonds and corporate bonds. You will need the interbank markets, too, with credit risk being taken in a variety of ways. Financial institution bankruptcy can happen due to either liquidity or solvency crises, and if a financial firm goes bankrupt, its portfolio is sold to the market. Demand for credit is set by the economic cycle, and there are fundamentals bubbling along too, with random defaults of entities issuing bonds and taking loans.

Next, set some rules for the banks and brokers. We can start with the current regulatory capital rules. Banks will have a capital structure with both a term structure of debt and equity, and they will have to be capital adequate at all times. The same goes for brokers, but they can have different reg cap rules in general, to model the SEC vs. Fed divide.

Now the game. There are two classes of players. The first class is the bankers: they define trading rules for an individual bank. They can’t dictate transactions; rather, they write rules which determine what transactions a bank will do, depending on market conditions. There can be as many bankers as there are banks, but balance sheet size and initial capital are allocated randomly to players at the start of the game, subject to plausible distributions.

The game proceeds by the simulation being run through time; this is then repeated many times. The banker’s payoff is the positive part of the bank’s profit, averaged across all the simulations. So, like the real world, these guys score higher if their banks make a lot of money in a variety of conditions.

The second class of players is the regulator. This player rewrites the rules that the banks must obey. Their score is based on the number of bankruptcies and both the volatility and level of credit supply: basically they score highly if there are no bank failures and credit grows slowly but steadily.

With sufficient (= a lot of) computing power, you could have a number of people playing as regulators, each simultaneously facing all the bankers. You could even use genetic algorithms or any other adaptive strategy you like as the regulator. It would be fascinating to see what rules emerged as winners.

There is a lot more you could do, too. For instance, you could impose a change on fundamentals and see what happened. You could road test new rules and see how players game them. You could, with a bit of work, find out what market dynamics lead to the most bankruptcies, or the biggest systemic crises. You could see what bank strategies are most profitable but lead to high tail risk. It might not be as popular as World of Warcraft, but I bet if you got the user interface slick enough, quite a few financial services people would play, and all that expertise could be used to improve the capital rules. The key point is that even if you don’t believe the results of the simulation are realistic, having something that suggests financial system vulnerabilities on the basis of actual dynamics and attempted gaming of the system could be quite useful.
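
To show how little machinery is needed to get started, here is a deliberately crude skeleton of the game (every rule, score and number is a placeholder for the richer model described above, not a serious calibration): bankers choose a target leverage, the regulator chooses a leverage cap, shocks arrive, and failures are counted.

```python
import random
from dataclasses import dataclass

@dataclass
class Bank:
    name: str
    strategy: float          # target leverage, chosen by the 'banker' player
    equity: float = 10.0
    assets: float = 0.0      # value of risky assets held

    def rebalance(self):
        # Move part-way towards the banker's target leverage each period.
        target = self.strategy * max(self.equity, 0.0)
        self.assets += 0.25 * (target - self.assets)

@dataclass
class Regulator:
    max_leverage: float      # the rule the 'regulator' player gets to set

    def adequate(self, bank):
        return bank.equity > 0 and bank.assets <= self.max_leverage * bank.equity

def run_game(strategies, regulator, periods=200, seed=0):
    """One run of the game. In the full version this would be repeated many
    times, with richer markets, funding and credit demand."""
    random.seed(seed)
    banks = [Bank(f"bank{i}", s) for i, s in enumerate(strategies)]
    failures = 0
    for _ in range(periods):
        shock = random.gauss(0.001, 0.02)        # market-wide asset return
        for b in list(banks):
            b.equity += b.assets * shock          # gains and losses hit equity
            b.rebalance()
            if not regulator.adequate(b):
                failures += 1                     # failed or breached the rule
                shock -= 0.01                     # its fire sale hurts the rest
                banks.remove(b)
    banker_scores = {b.name: max(b.equity - 10.0, 0.0) for b in banks}
    return banker_scores, -failures               # regulator scored on failures

print(run_game(strategies=[5, 10, 20, 30], regulator=Regulator(max_leverage=25)))
```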

Why we do crazy things July 11, 2010 at 11:07 am

Rajiv Sethi makes an important and subtle point in a post on Naked Capitalism. He is discussing the behavioural finance literature, and in particular the idea that failure to correctly estimate the probability of bad outcomes leads to the design of unsafe securities that look safe:

…what troubles me about this paper (and much of the behavioral finance literature) is that the rational expectations hypothesis of identical, accurate forecasts is replaced by an equally implausible hypothesis of identical, inaccurate forecasts. The underlying assumption is that financial market participants operating under competitive conditions will reliably express cognitive biases identified in controlled laboratory environments. And the implication is that financial instability could be avoided if only we were less cognitively constrained, or constrained in different ways — endowed with a propensity to overestimate rather than discount the likelihood of unlikely events for example.

Now this is a little unfair in that the authors don’t make the explicit read-across from ‘if people are wrong about the likelihood of crashes, then they produce overpriced securities which will fail catastrophically in a crisis’ to ‘overpriced securities which failed catastrophically in a crisis were produced, therefore people mis-estimated tail probabilities’. But certainly the authors invite such a reading, so Rajiv’s comment is reasonable. It is the next part of his argument that really resonates though:

This narrowly psychological approach to financial fragility neglects two of the most analytically interesting aspects of market dynamics: belief heterogeneity and evolutionary selection. Even behavioral propensities that are psychologically rare in the general population can become widespread in financial markets if they result in the adoption of successful strategies. As a result, asset prices disproportionately reflect the beliefs of investors who have been most successful in the recent past. There is no reason why these beliefs should consistently conform to those in the general population.

I think that this is right, and it deserves to be better understood. I would even go further, because this argument neglects the explicitly reflexive nature of market participants’ thinking. (Call it social metacognition if you really want some high-end jargon.) Traders can absolutely understand that a behavioral propensity is rare and likely to lead to catastrophe, and still behave that way: they do this because they believe that other market participants will too, and behaving that way if others do will make money in the short term. Even if you think that it is crazy for (pick your favourite bubblicious asset) to trade that high, providing you also believe others will buy it, then it makes sense for you to buy it along with the crowd. Moreover, worse, you may well believe that they too think it is crazy: but all of you are in a self-sustaining system and the first one to get off looks the most foolish (for a while). Most people are capable of spotting a bubble if it lasts long enough: the hard part is timing your exit to account for the behaviour of all the other smart people trying to time their exit too.

Freeland sensibilities July 3, 2010 at 7:20 am

As Reuters correspondents go, Chrystia Freeland is sensible, certainly more so than some of her fishy colleagues. She recently pointed out the importance of systemic weaknesses, rather than individuals, in the Crunch – a point I have been making since it began:

Blaming the crisis on human error is a lot easier than trying to work out the systemic problems it laid bare… But just because something is easy doesn’t make it accurate.

In particular she points out that Chuck Prince actually had a point in his much-derided “as long as the music is playing, you’ve got to get up and dance. We’re still dancing.” remark:

What’s really unsettling about Prince’s observation is not that he was wrong, but that he was right… Peter Weinberg [said] “It’s very, very hard to lean against the wind in a bubble. … If one of the heads of the large Wall Street firms stood up and said, ‘You know what, we’re going to cut down our leverage from 30 to one to 15 to one, and we’re not going to participate in a lot of the opportunities in the market’ — I’m not sure that chief executive would have kept his job.”

This is entirely on point. In the main, investment bankers did their job. OK, there were some clear errors of judgment, perhaps some fraud, perhaps some inadequacies of disclosure. But the big question, the question that must be addressed if we want a safer financial system, is ‘why did the system make it their job to do these things?’ Ms Freeland points out:

Not only is betting against an asset bubble dangerous — buying into it can be smart.

She then references a 2003 Brunnermeier and Abreu Econometrica article which you can find here. This of course leads to the vexed and complicated question of macroprudential regulation, aka anti-cyclical regulation: how can we stop it being so attractive for firms to inflate bubbles? We are still taking baby steps in this area. But it is this, rather than throwing stones at individuals, that will make the next crisis less likely.

Practical procyclicality April 27, 2010 at 7:46 pm

There is a noticeable piece of terminological gymnastics that commentators engage in when discussing regulatory measures. If they are in favour of something, they call it risk sensitive. If they are against, they call it procyclical. Now, not all risk sensitive measures are procyclical and vice versa, but the connection between them is strong, and the tension is unavoidable.

This is particularly so as there is a paucity of satisfactory ways of modifying current regulatory arrangements to make them less procyclical. The ‘Spanish’ dynamic provisioning scheme is one suggestion, but this only addresses expected loss provisions, not capital, and it is anyway fraught with difficulties, many of them accounting-related. (Some firms’ accounting is focussed only on provisions for loans which are already uncollectable, in some sense; others provision for expected future losses which have not yet occurred; many mix the two. The Spanish proposal essentially allows for a flow from the EL provision into the incurred provision in bad times, and forces higher EL provisions in good ones, but it only works if you have both types of buffer.) Clearly if one could identify the place in the cycle then one could set an anticyclical capital buffer, for instance requiring a 10% Basel ratio at the top of the cycle vs. a 6% one at the bottom. But the difficulty is knowing where one is. It seems (and this is anecdotal – I will try to find a reference) that the ratio of credit growth to GDP growth is a reasonable way of identifying the upswing in the economic cycle, but that it is less helpful for spotting a crisis. One could perhaps devise a methodology involving various stress indicators such as the price of credit (i.e. bond and CDS indices), the price of liquidity (i.e. the spread between interbank and government rates), market volatility indicators (such as the VIX) and country risk. But the model risk is considerable.
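
As a toy illustration of the kind of rule this points towards (my own sketch, with illustrative numbers only): map a credit-growth-to-GDP-growth indicator into a required capital ratio somewhere between the 6% and 10% mentioned above.

```python
def countercyclical_ratio(credit_growth, gdp_growth,
                          low=0.06, high=0.10, neutral=1.0, span=1.0):
    """Toy mapping from a cycle indicator to a required capital ratio: when
    credit grows much faster than GDP we are presumed near the top of the
    cycle and the requirement rises towards `high`; when credit growth lags
    GDP it falls back towards `low`. All parameters are illustrative."""
    indicator = credit_growth / gdp_growth if gdp_growth > 0 else float("inf")
    # Scale the indicator into [0, 1] around the neutral point.
    position = min(max((indicator - neutral) / span, 0.0), 1.0)
    return low + position * (high - low)

# Hypothetical readings: credit growing 8% a year against 2% GDP growth.
print(countercyclical_ratio(credit_growth=0.08, gdp_growth=0.02))
```

The model risk noted above lives entirely in the choice of indicator and thresholds, which is exactly why this is a sketch rather than a proposal.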

Subjective judgements are also problematic. Imagine the impact on confidence if the Financial Stability Board officially announced that we are in a crisis and so bank capital ratios have been cut by 2%. A clearer signal to stop lending in the interbank and repo markets is difficult to imagine.

Another problem with risk sensitivity is that, like any measure which discriminates, well, it discriminates. Specifically it makes credit more expensive for borrowers which are perceived as riskier. If you want to ensure that access to credit is broad, and that the price of credit does not vary too much over time, then that is problematic. Certainly some governments do want to achieve this end, claiming that it is a societal good. In that case, having ever more risk sensitive capital requirements gets them further from their goal, not closer.

Of course, the argument in the other direction is forceful. If capital requirements are not risk sensitive, then some trade or other will be incentivised: in the case we suggest, it will be better to lend to low credit quality companies and worse to lend to AAA corporates. If one has a bias at all, that one makes sense, as high quality companies have access to the bond markets, whereas low quality ones often do not. However the subprime crisis demonstrates the dangers of making credit too cheap for bad borrowers, so more generosity is not necessarily better.