Category: Emergence

Bubble me do January 17, 2014 at 11:17 am

‘Bubbles’ are much debated at the moment – by Gavyn Davies here and here, Daoud & Diaz here, Fama here, Stein here, and so on.

A key issue, obviously, is what a bubble actually is. The Brunnermeier definition is:

Bubbles are typically associated with dramatic asset price increases followed by a collapse. Bubbles arise if the price exceeds the asset’s fundamental value. This can occur if investors hold the asset because they believe that they can sell it at a higher price to some other investor even though the asset’s price exceeds its fundamental value.

I think this is a terrible definition. There are two big things wrong with it:

  • It implies that an asset price rise is only a bubble if it is followed by a collapse, in other words that, by definition, slow bubble deflation is impossible.
  • It assumes that there is a single ‘fundamental’ price which we can measure deviations from. In reality of course ‘fundamentals’ are just as socially constructed as market prices, and just as arbitrary. (I have used the catchphrase ‘prices are Schelling points’ in the past.)

We can’t reasonably hope to address the financial stability implications of bubbles until we have a better definition of what a bubble is.

Going loopy with the SEC May 30, 2013 at 8:19 pm

Thanks to Matt Levine, I have this lovely story of the Facebook IPO on Nasdaq. Matt points us at the SEC account of that dismal day for NASDAQ here. The first few pages are hilarious:

In a typical IPO on NASDAQ, shares of the issuer are sold by the IPO’s underwriters to participating purchasers at approximately midnight and secondary market trading begins later that morning. Secondary trading begins after a designated period – called the ‘Display Only Period’ or ‘DOP’ – during which members can specify the price and quantity of shares that they are willing to buy or sell (along with various other order characteristics), and can also cancel and/or replace previous orders. The DOP usually lasts 15 minutes…

At the end of the DOP, NASDAQ’s “IPO Cross Application” analyzes all of the buy and sell orders to determine the price at which the largest number of shares will trade and then NASDAQ’s matching engine matches buy and sell orders at that price…

NASDAQ’s systems run a ‘validation check’ which confirms that the orders in the IPO Cross Application are identical to those in NASDAQ’s matching engine. One reason that the orders might not match is because NASDAQ allowed orders to be cancelled at any time up until the end of the DOP – including the very brief interval during which the IPO Cross Application was calculating the price and volume of the cross. If any of the orders used to calculate the price and volume of the cross had been cancelled during the IPO Cross Application’s calculation process, the validation check would fail and the system would cause the IPO Cross Application to recalculate the price and volume of the cross.

This second calculation by the IPO Cross Application, if necessary, incorporated only the first cancellation received during the first calculation, as well as any new orders that were received between the beginning of the first calculation and the receipt of that first cancellation. Thus, if there were multiple orders cancelled during the first IPO Cross Application’s calculation, the validation check performed after the second calculation would fail again and the IPO Cross Application would need to be run a third time in order to include the second cancellation, as well as any orders received between the first and second cancellations.

Because the share and volume calculations and validation checks occur in a matter of milliseconds it was usually possible for the system to incorporate multiple cancellations (and intervening orders) and produce a calculation that satisfies the validation check after a few cycles of calculation and validation. However, the design of the system created the risk that if orders continued to be cancelled during each recalculation, a repeated cycle of validation checks and re-calculations – known as a ‘loop’ – would occur, preventing NASDAQ’s system from: (i) completing the cross; (ii) reporting the price and volume of the executions in the cross (a report known as the “bulk print”); and (iii) commencing normal secondary market trading.

This is precious in so many ways, and I am sure that you can guess what happened next. Don’t you love the SEC telling us what a loop is? But lolz aside, it does suggest that IT systems written for the pre-HFT era are not necessarily fit for purpose today.
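For the programmers in the audience, here is a toy reconstruction of the failure mode the SEC is describing – not NASDAQ’s actual code, obviously – showing how a validate-and-recalculate cycle fails to terminate if cancellations keep arriving faster than the cross can be recomputed.

    import random

    def run_ipo_cross(orders, cancels_during_calc, max_cycles=1000):
        # Toy reconstruction of the validate-and-recalculate cycle in the SEC's account
        # (not NASDAQ's actual system). Each pass recalculates the cross, then checks
        # whether any order used in the calculation was cancelled while it was running.
        cycles = 0
        while cycles < max_cycles:
            cycles += 1
            # ... price/volume calculation over `orders` happens here (a few ms) ...
            arrived = cancels_during_calc()   # cancellations received during the calculation
            if arrived == 0:
                return cycles                 # validation check passes: the cross completes
            if orders:                        # validation fails: fold in one cancellation,
                orders.pop()                  # plus intervening orders, and recalculate
        raise RuntimeError("stuck in the loop: the cross never completes")

    # A quiet book: cancellations tail off and the cross completes in a few cycles.
    quiet = iter([3, 2, 1, 0])
    print(run_ipo_cross(list(range(100)), lambda: next(quiet)), "cycles")

    # A Facebook-style book: at least one cancellation arrives during every
    # recalculation, so the validation check never passes.
    busy = lambda: random.randint(1, 5)
    try:
        run_ipo_cross(list(range(100)), busy)
    except RuntimeError as err:
        print(err)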

Healing circuits March 29, 2013 at 12:51 pm

I have endured a couple of talks recently on the use of network methods in financial stability analysis. While the general idea is interesting, the specific applications struck me as dubious in the extreme. So it was with some relief that I read something about robustness that was useful and impressive – albeit with no finance connection. Science Daily reports:

a team of engineers at the California Institute of Technology (Caltech)… wanted to give integrated-circuit chips a healing ability akin to that of our own immune system — something capable of detecting and quickly responding to any number of possible assaults in order to keep the larger system working optimally. The power amplifier they devised employs a multitude of robust, on-chip sensors that monitor temperature, current, voltage, and power. The information from those sensors feeds into a custom-made ASIC unit on the same chip… [This] brain analyzes the amplifier’s overall performance and determines if it needs to adjust any of the system’s actuators…

[The ASIC] does not operate based on algorithms that know how to respond to every possible scenario. Instead, it draws conclusions based on the aggregate response of the sensors. “You tell the chip the results you want and let it figure out how to produce those results,” says Steven Bowers… “The challenge is that there are more than 100,000 transistors on each chip. We don’t know all of the different things that might go wrong, and we don’t need to. We have designed the system in a general enough way that it finds the optimum state for all of the actuators in any situation without external intervention.”
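The general idea – search over the actuator settings using only aggregate sensor feedback, with no model of every possible fault – is easy to sketch. Something like this toy hill-climber (my own illustration, not the Caltech design):

    import random

    def self_heal(error, actuators, step=0.05, iters=500):
        # Generic sensor-driven tuning: `error(settings)` is how far the aggregate
        # sensor readings are from the performance you asked for. We nudge one
        # actuator at a time in whichever direction reduces that error, with no
        # model of which transistor (if any) has failed.
        settings = dict(actuators)
        for _ in range(iters):
            name = random.choice(list(settings))
            for delta in (step, -step):
                trial = dict(settings, **{name: settings[name] + delta})
                if error(trial) < error(settings):
                    settings = trial
                    break
        return settings

    # Toy 'chip': the performance error depends on two bias settings; a 'fault'
    # shifts the optimum, and the tuner finds the new optimum from feedback alone.
    random.seed(1)
    fault = 0.7
    error = lambda s: (s["bias_a"] - fault) ** 2 + (s["bias_b"] + 0.3) ** 2
    print(self_heal(error, {"bias_a": 0.0, "bias_b": 0.0}))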

Self-Fulfilling Global Panics… November 1, 2012 at 8:28 am

… by Philippe Bacchetta and Eric van Wincoop in the Monetary Authority of Singapore Macroeconomic Review is worth reading. You can find it here. (HT FT Alphaville, Bloomberg)

The key point:

Macroeconomic fundamentals have a dual role in our theory. One is a standard role, where, for example, a deterioration of a fundamental variable reduces expectations of future firm earnings and dividends, which lower the asset price. The other role is one of generating self-fulfilling shifts in perceived risk.

This happens in a way that is entirely disconnected from the fundamental role of the macro variable. This is perhaps easiest to understand when thinking of the variable as a pure sunspot, i.e. one that has no fundamental role at all. When investors believe that asset price risk (uncertainty about the asset price tomorrow) depends on the sunspot, and act on those beliefs by selling the asset when risk increases, then the price will depend on the sunspot as well. This suggests that tomorrow’s price depends on the sunspot tomorrow. This in turn implies that asset price risk depends on uncertainty about the sunspot tomorrow. The latter will, in general, depend on the current level of the sunspot, making the perceived dependence of risk on the sunspot self-fulfilling.

Prices as Schelling points indeed.
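A toy simulation makes the mechanism concrete. This is my own sketch, not the Bacchetta–van Wincoop model: I simply posit a payoff-irrelevant sunspot whose own uncertainty depends on its level, let traders knock a risk premium off the price when perceived risk is high, and check that the perceived link between the sunspot and price risk is then validated by the prices they see.

    import numpy as np

    rng = np.random.default_rng(0)
    T, rho, gamma, fundamental = 100_000, 0.95, 2.0, 100.0

    # A payoff-irrelevant 'sunspot' whose own uncertainty depends on its current level.
    s = np.zeros(T)
    for t in range(1, T):
        s[t] = rho * s[t - 1] + (0.1 + 0.2 * abs(s[t - 1])) * rng.standard_normal()

    # Traders believe tomorrow's price is riskier when the sunspot is high today,
    # and knock a risk premium off today's price accordingly.
    perceived_risk = (0.1 + 0.2 * np.abs(s)) ** 2
    price = fundamental - gamma * perceived_risk

    # The belief is self-fulfilling: realised price risk, conditional on today's
    # sunspot, really is higher when the sunspot is high.
    dp = np.diff(price)
    high = np.abs(s[:-1]) > np.quantile(np.abs(s), 0.8)
    print("price-change variance after high-sunspot days:", dp[high].var())
    print("price-change variance after low-sunspot days: ", dp[~high].var())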

Encouraging diversity April 16, 2012 at 7:48 am

Charles Goodhart and Wolf Wagner have an interesting but flawed idea which harks back to a recent post of mine:

…[The] lack of diversity [among financial institutions] is very costly for society. Similar institutions are likely to encounter problems at the same time. This makes systemic crises – such as the crisis of 2007-2009 – more likely…

… [The] tendency for the financial system to become excessively homogenous provides a rationale for regulation that encourages diversity. But what form should such regulation take?

We believe that imposing direct restrictions on the activities of financial institutions is not a promising avenue. As argued above, diversity arises through many and very different channels. Such regulation would thus be very complex and burdensome. An additional issue is that diversity cannot be easily quantified. For example, it would be an onerous task for regulators to measure similarities in bank funding structures – not to speak of similarities created by counterparty risks.

We instead advocate an approach where financial institutions will be subjected to capital requirements that condition on how correlated their overall activities are with the rest of the financial system. This may be in the form of a surcharge on existing capital requirements or, preferably, a redefinition of current risk weights that keep average capital requirements unchanged.

The problem with this is that it makes it impossible to manage your capital. You don’t know what your charge will be because you don’t know what other institutions are doing. Moreover, measuring correlation is very hard: Goodhart and Wagner suggest using equity return correlations or relative correlations, but equity is a call on asset value struck at the face value of debt, so equity/equity correlations relate non-linearly to asset value correlations. The latter, though, are not directly observable.
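To see the problem, treat each bank’s equity as a call on its assets struck at the face value of its debt and simulate. This is a sketch under Merton-model assumptions with made-up leverage and volatility numbers; the point is only that the observable equity correlation and the unobservable asset correlation need not coincide, and that the mapping between them depends on leverage and asset volatility.

    import numpy as np
    from scipy.stats import norm

    def equity_value(assets, debt_face, sigma, T=1.0, r=0.0):
        # Merton view: equity is a call on the bank's assets struck at the face value of its debt.
        d1 = (np.log(assets / debt_face) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return assets * norm.cdf(d1) - debt_face * np.exp(-r * T) * norm.cdf(d2)

    rng = np.random.default_rng(1)
    n, asset_corr, sigma = 200_000, 0.5, 0.2

    # Two banks with asset-value shocks of known correlation 0.5, but different leverage.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, asset_corr], [asset_corr, 1.0]], size=n)
    assets = 100.0 * np.exp(sigma * z - 0.5 * sigma**2)
    eq_low_lev = equity_value(assets[:, 0], 70.0, sigma)    # debt face value 70
    eq_high_lev = equity_value(assets[:, 1], 95.0, sigma)   # debt face value 95

    print("asset correlation: ", round(np.corrcoef(assets[:, 0], assets[:, 1])[0, 1], 3))
    print("equity correlation:", round(np.corrcoef(eq_low_lev, eq_high_lev)[0, 1], 3))
    # You cannot read the (unobservable) asset correlation straight off the
    # (observable) equity correlation without a model of each bank's leverage.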

What you want is some approach that incentivises strategic diversification. Strategic change takes a while though; you can’t suddenly turn Goldman Sachs into HSBC or vice versa even if you wanted to. Even if Goldman wanted an emerging market retail banking franchise, for instance, it would take them years to acquire it and figure out how to run it.

Goodhart and Wagner have the right aim but the wrong way of getting there. Their proposal doesn’t allow for the difficulty of changing your correlation with the rest of the system, nor for the importance of a clear target level of capital that does not vary with unhedgeable factors. This idea deserves some more thought.

Local vs global optimization for corporations April 5, 2012 at 6:34 am

Doug had a very interesting comment to my post about evolutionary diversity and banking. I’ll set up the problem, then quote some of his comment and try to give my spin on his questions.

Essentially we are concerned with an unknown fitness landscape where we are trying to find the peaks (best-adapted organisms or most profitable financial institutions) based on changes in makeup (genetics or business model). The landscape might have one peak, or two, or twenty-seven. The peaks might be of similar height, or they might be wildly different; and the local maxima may or may not be close to the global maximum. Moreover, you only have knowledge of local conditions. The question is how you optimize your position in this landscape.

This is related to the topic of metaheuristics… A typical scenario would be doing a non-linear regression and finding the model parameters that maximize fit on a dataset. In this scenario there’s no analytical solution (unlike, say, linear regression), so the only thing you can do is successively try points [to see how high you are] until you exhaust your resource/computational/time limits. Then you hope that you’ve converged somewhere close to the global maximum or at least a really good local maximum.

The central issue in metaheuristics is the “exploitation vs exploration” tradeoff. I.e. do you spend your resources looking at points in the neighborhood of your current maximum (climbing the hill you’re on)? Or do you allocate more resources to checking points far away from anything you’ve tested so far (looking for new hills)?

One of the most reliable approaches is simulated annealing. You start off tilting the algorithm very far towards the exploration side, casting a wide net. Then as time goes on you favor exploitation more and more, tightening on the best candidates.

Simulated annealing is good for many of these kinds of problems; there are also lots of other approaches/modifications. A couple of things to note though: there is no ‘best’ algorithm (ones that are good on some landscapes tend to fail really badly on others, while those that do OK on everything are always highly suboptimal compared with the best approach for that terrain); moreover, this class of problems for arbitrary fitness landscapes is known to be really hard.
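For the curious, here is a minimal simulated annealing sketch on a made-up one-dimensional landscape – nothing bank-specific about it, just the schedule Doug describes, from wide-ranging exploration while hot to local exploration as it cools:

    import math, random

    def fitness(x):
        # A made-up one-dimensional landscape with several peaks of different heights.
        return math.sin(3 * x) + 0.6 * math.sin(7 * x) - 0.05 * (x - 2) ** 2

    def anneal(steps=20_000, t0=2.0):
        x = random.uniform(-5, 5)
        best = x
        for i in range(steps):
            temp = t0 * (1 - i / steps) + 1e-6            # hot early: wide-ranging exploration
            candidate = x + random.gauss(0, 0.2 + temp)   # bigger jumps while hot
            delta = fitness(candidate) - fitness(x)
            # Always accept uphill moves; accept downhill moves with a probability
            # that shrinks as the temperature falls (cold late: local exploration).
            if delta > 0 or random.random() < math.exp(delta / temp):
                x = candidate
            if fitness(x) > fitness(best):
                best = x
        return best, fitness(best)

    random.seed(42)
    print(anneal())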

In what follows, I’ve taken the liberty of replacing the terms ‘exploration’ and ‘exploitation’ with ‘wide-ranging exploration’ and ‘local exploration’ respectively, as I don’t think ‘exploitation’ really captures the flavour of what we mean. Back to Doug:

I believe the boom/bust cycle of capitalism operates very much like simulated annealing. Boom periods when capital and risk is loose tend to heavily favor wide ranging exploration (i.e. innovation). It’s easy to start radically new businesses. Bust periods tend to favor local exploration (i.e. incremental improvements to existing business models). Businesses are consolidated and shut down. Out of those new firms and business strategies from the boom periods the ones that proved successful go on to survive and are integrated into the economic landscape (Google), whereas those that weren’t able to establish enough of a foothold during the boom period get swept away (Pets.com).

All of this is tangentially related, but it brings up an interesting question. Most of the rest of the economy (technology in particular) seems to be widely explorative during boom times. Banking in contrast seems to be locally explorative even during boom times, i.e. banking business models seem to converge to each other. Busts seem to fragment banking models and promote wider exploration.

So why is banking so different that the relationship seems to get turned on its head?

The cost of a local vs. global move is part of it. Local moves are expensive for non-financials, almost as expensive as (although not as risky as) global moves. That makes large moves attractive in times when the credit to finance them is cheap. When credit is expensive and/or rationed, incremental optimization is the only thing a non-financial can afford to do.

It’s different for many financials however. The cost of climbing the hill – hiring CDO of ABS traders – is relatively small compared to the rewards. Moreover there is more transparency about what the other guys are doing. Low barriers to entry and good information flow make local maximisation attractive. To use the simulated annealing analogy, banking is too cold; there isn’t enough energy around to create lots of diversity.

And is this a bad thing for the broader economy, and if so why?

I think that it is, partly because the fitness landscape can change fast and leave a poorly adapted population of banks. Also, there are economies of scale for financial institutions and high barriers to entry for deposit takers and insurers (if not hedge funds), so there are simply not enough financial institutions of material size. It is as if all your computation budget in simulated annealing is being spent exploring the neighbourhood of two or three spots in the fitness space.

A large part of the answer, it seems to me, is to make it easier to set up a small bank and much harder to become (or remain) a very large one.

Holding back evolution March 30, 2012 at 7:26 am

Hans asked a good question in comments to a prior post:

when you start off with 40 medium size banks, eventually a few will have a better business model than the others. And then the business model gets copied (due to shareholders seeing the return at the more successful banks and wanting the same) which leads to a convergence to 40 banks with (more or less) the same business model. Basically what we saw in the run-up to the financial crisis. After which the take-overs can begin due to economy of scale.

In other words: I agree that ‘evolution’ thrives on diversity, but how do you prevent convergence to one (or 2/3) business models?

I have to say that that one has me stumped for now. The fitness landscape changes fast for banks, so rapid change (what an evolutionary biologist would call saltation) is the norm. If we let evolutionary pressure bear on a diverse set of creatures in a fitness landscape with a single peak – a single business model – the ones that don’t climb the peak aren’t very successful. So do we have to imagine legislators coming in like comets every 50 years and imposing diversity again? That’s pretty depressing.

The problem is the premise: a single-peaked fitness landscape. Diversity is encouraged when there are lots of local maxima in the fitness landscape. We need, in other words, to make sure that lots of different banking models are acceptably profitable. There are two ways to do this of course: lifting up the little guys (aka the wildlife sanctuary approach) or crushing the big guys (aka a cull). To your elephant guns, gentlemen.

High frequency Haldane July 8, 2011 at 3:01 pm

Sorry, this is not the HFT blog, really it is not, but I can’t resist a quick copy/paste from the Bank of England on the latest speech by Andy Haldane:

…A second task is to rethink the design of trading infrastructures. Regulators in the US and Europe are in the process of doing that. One proposal is to require a commitment by market-makers to provide liquidity, whatever the state of the market. The difficulty appears to be in specifying these commitments in a precise enough fashion to make them credible.

Circuit-breakers are a second potential solution. They already exist on US and European exchanges. By calling a halt to trading, circuit-breakers provide a means of establishing a level informational playing field for all traders. The changing landscape of trading, both in speed and structure, has strengthened the case for such circuit-breakers. To be effective, however, they need to span all trading exchanges and platforms, as has recently been done in the US.

A more ambitious proposal still would be to impose a speed limit on trades at all times – so-called minimum resting periods. This would forestall the race to zero. It would do so by raising bid-ask spreads on average. But it would also potentially make them less variable, especially in situations of stress, improving the resilience of liquidity. In other words, there is a potential trade-off between market efficiency and stability. Historically, the regulatory skew has perhaps been towards the former objective. The new topology of trading means it may be time for that to change. As Andrew Haldane concludes: “Grit in the wheels, like grit on the roads, could help forestall the next crash”.

Here’s to grit. Have a nice weekend.

How much is a very short term option worth? July 7, 2011 at 1:48 am

Further to my last post, TSP suggests

My argument against a say 1 second minimum exposure time is based on the fact that quotes are like free options. Forcibly extending time-in-force increases the cost of that option to the quoter.

He’s right. But how much is that option worth? To answer that I used a jump diffusion model (because at the kind of timescale we are talking about, stock prices are jumps between the quoted prices). I priced a strangle struck 1 cent above and below the money with a 1-second expiry, calibrated to JPMorgan’s current data using the shortest-dated ATM options (7 day, currently)*. It turned out the 1-second option was worth a tiny fraction of a basis point.

So, let’s see. We can either (a) charge HFT shops a tiny fraction of a basis point for accessing the market and have a much better shot at protecting financial stability or (b) leave them alone. Um…

*The right way to do this is probably on a lattice rather than in continuous time. Also, as we are not looking to hedge this option, the real-world rather than the risk-neutral measure is probably the right one. That would allow one to calibrate to really short-term observed stock price jumps, which would be better than calibration to a seven-day option price. Still, I doubt it would change the price much.
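For anyone who wants to reproduce the flavour of the calculation, here is a Monte Carlo sketch under a Merton-style jump diffusion. The parameters are illustrative, not the actual JPMorgan calibration, but they are chosen to be roughly consistent with a short-dated implied vol in the mid-20s:

    import numpy as np

    rng = np.random.default_rng(7)

    # Illustrative parameters, not the actual calibration: a $40 stock, 25% annualised
    # diffusive vol, plus occasional one-tick jumps (about one every 50 seconds).
    S0, sigma, tick = 40.00, 0.25, 0.01
    T = 1.0 / (252 * 6.5 * 3600)              # one second of trading time, in years
    jumps_per_second = 0.02

    n = 2_000_000
    diffusion = S0 * sigma * np.sqrt(T) * rng.standard_normal(n)
    jumps = tick * rng.choice([-1.0, 1.0], size=n) * rng.poisson(jumps_per_second, size=n)
    ST = S0 + diffusion + jumps

    # A strangle struck one cent above and below the money, expiring in one second.
    payoff = np.maximum(ST - (S0 + tick), 0.0) + np.maximum((S0 - tick) - ST, 0.0)
    value = payoff.mean()
    print(f"strangle value: ${value:.6f} per share, i.e. {1e4 * value / S0:.4f} bp of spot")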

Nuancing HFT July 6, 2011 at 12:18 am

The Streetwise Professor responded to my post on HFT with a number of good points.

I am skeptical that requiring trading on a central limit order book (CLOB) will eliminate flash crashes. Note that flash crashes have occurred on futures markets with CLOBs.

Fair enough. But with a CLOB, you can suspend all trading at once, you have some hope of defining what best execution means, and you can impose behavioural restrictions in one place. Now granted you can do all of those things with some difficulty in more diverse market infrastructures, but they are a lot harder.

I’m also skeptical that dark pools have anything to do with flash crashes, and am also not convinced that the dark pool structure–which as DEM notes, has effectively supplanted the block market for facilitating large trades–is detrimental to the interests of those who want to trade in size.

What dark pools have done is dramatically reduce the average trade size, forcing everyone to cut even retail-sized positions into multiple trades, while obscuring precisely what the current price is. In the old market maker system, the market maker provided block trading capacity without loss of transparency to the market. My main problem with dark pools is exactly that they are dark.

I suspect I’m not too far from TSP here though, as he goes on to say

Top-of-book protection provides a way for HFTs to make money by exploiting fragmentation of liquidity. Creating a real CLOB entails a whole set of issues … but these can be avoided through the creation of a virtual CLOB that protects the entire books of the various liquidity pools.

And that, frankly, would do provided that you can also suspend trading, protect best execution, and so on (which you can, with effort).

One of the suggestions which I stand by, and which TSP doesn’t like, is a minimum quote time for orders. He says:

… restrictions on quoting–e.g., DEM’s suggestion that quotes be good for one second–could well be counterproductive in that regard. The current evidence suggests that order flow became much more toxic on 6 May 2010, and that’s why HFTs stopped supplying liquidity. Forcing them to keep their quotes good for longer than they would choose to on their own in such circumstances makes it more likely they will not quote at all.

Two things. First, very short-term feedback loops are a key component of flash-crash-like behaviour. If the bots can’t react that fast, they can’t crash that fast. Crashes must happen on human timescales, if at all, so that we have the chance to intervene. The sand-in-the-machine of requiring quotes to be good for half a second is vital to preventing rapid phase changes in market behaviour: emergent behaviour is much less common in discrete than in continuous systems.

Second, because you can’t put out an order for a few shares good for a millisecond, you can’t make as much money per second on a given order size. That means that you have to quote in bigger size to make money. Instead of cents per order, you have to make hundreds of dollars; hence, quotes in bigger size, which is good for market liquidity. Now, will anyone volunteer to explain ‘emergent dynamics’ to the SEC?

How wrong am I about HFT? July 2, 2011 at 1:30 pm

Doug very kindly made a detailed comment on my post about optimal levels of friction in markets, which I have been meaning to reply to for a while.

Broadly my take on HFT is that it produces poor quality liquidity – there when you don’t need it and gone when you do – and that if the predominance of trading is bot vs. bot at high frequency, then the dynamics of the market can change in bad ways, witness the flash crash. Doug makes me think twice, though, about some of this, so let’s look at some of what he has to say.

Well if you’re looking for academic literature that tries to identify what’s the best point between very frictional markets and HFT, you might first want to find academic literature that confirms your belief that HFT is bad to begin with. On this front most academic studies tend to find 1) HFT broadly reduces trading costs, 2) HFT increases market liquidity, 3) HFT reduces intraday volatility by filtering trading noise.

It is (3) that I find surprising. It’s relatively straightforward to find definitions of trading costs and liquidity such that (1) and (2) work. I would argue that HFT has reduced trade sizes and decreased liquidity/increased costs for block trades, especially combined with the move to trading the VWAP rather than brokers taking on blocks as a risk trade, but (3) really gives me pause for thought. Is it true?

Well, it depends. If you look at short term (a few seconds) big swings, then HFT has clearly made things worse. See here.

Moreover, HFT activity is correlated with volatility: see here.

Finally HFT seems associated with an increase in autocorrelation of stock returns: see here.

Even without this, there are reasonable concerns that (in the words of the Bank of Canada Financial System Review):

Some HFT participants [may] overload exchanges with trade messaging activity; use their technological advantage to position themselves in front of incoming order flow, making it more difficult for participants to transact at posted prices; or withdraw activity during periods of pricing turbulence

Then of course, turning back to Doug, …

… there’s the Flash Crash. It’s hard to determine though what the total amount of economic damage from it was. Arguably the August 2007 quant meltdown disrupted more economic activity by causing less displacement from fair value but over a more prolonged time period. So it’s hard to tell if the more frictionless HFT is more disruptive than older inter-day stat arb (which to a large degree it’s supplanted).

Overall HFT firms hold very small portfolios relative to their volume, because of very high turnover. E.g. a typical first-tier HFT group might run 5-10% of US equity volume while having maximum intraday gross notional exposure of $1 bn or less (with much less overnight). Even if 5 HFT firms with perfectly correlated positions simultaneously liquidated their portfolios, it would generate less order flow than the unwinding of a mid-sized hedge fund. The fat-finger order that catalyzed the flash crash (75k ES contracts) was simply too large a position to be held by any HFT desk.

Which isn’t to say that the changes wrought by HFT and electronic trading didn’t have anything to do with it. Once the market gets used to a certain level of liquidity it becomes very painful to take it away. Traders will continue to try to trade at the same order sizes while market makers provide much less liquidity. The same order flow magnitude will lead to outsized market impact and extreme swings. This is clearly what happened in the flash crash, when quoted size per level on many names went from quantities of 10,000 to 50 or less. Clearly this is less of a problem in an old-school dealer or specialist market.

If the market makers stop or hold off on quoting people on the phone for 90 seconds to figure out what, if anything, is wrong with P&G, it doesn’t lead to market panic. But if electronic market makers pull their quotes for 90 seconds, a lot of people are going to keep trading through those thin quotes and you’re going to get insane $0.50 trades hitting the tape. Of course everyone responds to that and panics, potentially triggering margin calls, etc.

Exactly. My recipe for HFT is not to withdraw it, but to reduce the impact of its speed by (1) requiring all market participants to trade on a central order book – no dark pools – and (2) requiring all quotes to be good for a minimum of half a second. This would affect real trading activity very little while completely wrecking the high frequency strategies that generate flash crash risk.

Doug makes an interesting claim though:

However given that the new paradigm of electronic trading has only failed to deliver the liquidity expected of it by the broad market for 20 minutes in the six years since Reg NMS I’d say that it’s a fairly good record. That still means 99.995% of the time investors reaped much lower transaction costs. And basically unless you yourself were either A) an intraday trader, or B) a levered investor whose broker used intraday positions to calculate margin, you were unaffected. The buy-and-hold retail investor doesn’t care if the P&G he owns temporarily goes to 0 for 3 minutes.

Well, it was more like 30 minutes I think, but anyway the claim deserves analysis. If HFT really lowered bid/ask spreads, then perhaps a once in six year flash crash is an acceptable price to pay for that. Scary though the flash crash was, Doug is right, it did very little direct economic damage. One might argue that it did quite a bit of indirect damage in reducing confidence in markets, but that is a pretty woolly claim. No, Doug’s point is a reasonable one and it deserves further analysis.
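A quick sanity check on the headline number, using my 30-minute estimate and rough US cash equity hours:

    # Rough sanity check on that figure, using my 30-minute estimate and US cash
    # equity hours (roughly 252 trading days a year of 6.5 hours each, six years since Reg NMS).
    trading_minutes = 6 * 252 * 6.5 * 60
    disrupted = 30
    print(f"fraction of trading time disrupted: {disrupted / trading_minutes:.4%}")
    print(f"fraction unaffected:                {1 - disrupted / trading_minutes:.4%}")

Back to Doug: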

I’d say the stronger argument against frictionless markets in general and HFT specifically is that its social benefit is small relative to its private profits. If you consider the actual purpose of the financial markets, to allocate capital efficiently, a good metric is the total amount of dollars you add to your portfolio’s return. HFT firms earn high profits relative to this because their low risk and high returns allow them to basically need no outside capital. 100% of the trading PnL beats the standard hedge fund 2/20 or the long-only <1% management.

...The real cost of decreasing market friction is that it makes these high return, low risk, low capacity strategies feasible and directs investing resources and talent away from more constructive pursuits.

Hmmm, yes, I agree, this is a pretty abstract (if enormously profitable) game that many of our best and brightest are involved in. It does seem bizarre that we tolerate a market infrastructure that allows HFT players to extract such high profits so reliably, while paying rather little back and while tying up so many clever people on something essentially useless. If HFT profits were taxed at, say, 75%, then I would feel a lot better about it. But they aren’t, and probably it is politically much easier to change markets so that HFT profits are lower than to impose sufficient taxes and/or capital requirements to bring them into line. (There’s an idea: a capital requirement proportional not to your risk position but to the number of trades you do… I like it…)

In any event, though, it is clearly worth asking the question ‘what are the costs and benefits of HFT?’. It’s complicated, with a significant amount of evidence on both sides, but consideration of the sheer profitability of HFT must weigh large in any answer.

Slippery when wet: arbitrage channels and market efficiency June 24, 2011 at 6:13 am

How much frictional cost do you want in a market?

Just enough, obviously.

The question and answer were brought to mind by a recent speech by Myron Scholes reported in Risk magazine. He warns:

“If you restrict or require more capital of banks, what will happen is that they have to wait until the deviations [in price] get larger before they intermediate, because they have to make a return on the capital they are employing,” he said. “As intermediary services stop, markets then become more chaotic.”

Scholes is right. The all-in cost of a trade depends on its capital usage. If banks have to hold more capital, fewer trades are profitable after the cost of capital is included, so fewer trades happen. Thus markets are less well arbitraged and hence less efficient (unless other players step in, at least).
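A back-of-the-envelope version of his point, with purely illustrative numbers: the mispricing has to cover the required return on the capital the trade ties up, so raising capital requirements raises the threshold at which arbitrageurs step in.

    def min_profitable_deviation(capital_per_notional, hurdle_rate, holding_years, trans_cost_bp=1.0):
        # Smallest mispricing (in bp of notional) worth arbitraging: it has to cover
        # transaction costs plus the required return on the capital the trade ties up.
        capital_charge_bp = 1e4 * capital_per_notional * hurdle_rate * holding_years
        return trans_cost_bp + capital_charge_bp

    for cap in (0.02, 0.04, 0.08):            # capital per unit of notional
        hurdle = min_profitable_deviation(cap, hurdle_rate=0.15, holding_years=0.25)
        print(f"{cap:.0%} capital -> only arbitrage mispricings wider than {hurdle:.1f} bp")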

If frictional costs are very low then you get a huge amount of trading – as in HFT – and that in itself is destabilizing. But if they are too high, you get inefficiency. I would love to see some academic work on where the sweet spot is.

Meta is bad May 16, 2011 at 10:07 am

From an insightful post on the Wisdom of Crowds on Wired (HT FT Alphaville):

Although groups are initially ‘wise,’ knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines collective wisdom… certain conditions must be met for crowd wisdom to emerge. Members of the crowd ought to have a variety of opinions, and to arrive at those opinions independently… Socially influenced test subjects, however, actually became less accurate.

This is (a) not surprising and (b) a real challenge to the (much challenged) efficient market ‘hypothesis’. Knowing what other people think not only makes you worse at estimating the truth, it makes the average estimate worse. So clearly one way to dramatically improve markets is to ban all financial journalism, right?

Tainting the barrel and other inappropriate metaphors February 22, 2011 at 11:49 am

I mentioned to a colleague yesterday that game theory is a trap for clever people interested in finance. That is, it is fascinating, it seems useful, and it’s complicated and difficult. So, often, such people waste months (or even years) learning about it before discovering that actually, despite the fact that it ought to be applicable, there’s very little useful that you can do with game theory in finance that has not already been done. Many of the people who fall for this kind of trap are clever and self-motivated; there is no point in trying to advise them – they have to find out for themselves.

I really hope that Andy Haldane isn’t this type of person, but I am afraid that he is. A recent FT Op Ed piece suggests that he has fallen into a similar trap: the theory of complex systems. In my youth I orbited around this strange attractor for a bit and, to be fair, I do think that there are some valuable insights from dynamical systems theory that can – sometimes – illuminate finance. Certainly the broad analogy between financial crises and phase changes, or between the dynamics of the financial system and those of a network of agents, is instructive. Let’s see what Haldane says:

Scaling up risks [in complex systems] may cause them to cascade rather than cancel out. The bigger and more complex the structure, the greater this risk. Why? Because size and complexity increase the chance of cross contamination.

Note that ‘may’. Yes, diversification can be a faux ami, there when you don’t need it and not there when you do. Yes, the interaction between funding liquidity risk and asset liquidity risk can be dangerous. You certainly don’t want to be forced to sell something when it is least liquid because you can’t fund it any more. And yes, the connectedness of financial institutions often matters just as much as their size. But to argue from the reasonable premise that the natural sciences provide ‘clues’ to better understand finance to the regulator’s universal remedy – banks must have more capital – is the most blatant non sequitur. It may well be that systemically important institutions should be safer than they are today, but given the dreadful state of our knowledge of the dynamics of the financial system and the paucity of theory and practice in macroprudential regulation, we don’t know and we don’t have a way of finding out. A humble ‘there is much to study here’ would suit Mr. Haldane better than a crude analogy used to justify a badly-researched policy.

The stock market as a distributed system October 12, 2010 at 8:04 am

My PhD is in computer science, specifically the theory of distributed computation, so I naturally tend to use that metaphor. It does make sense, though, for the stock market. After all, a lot of volume is being driven by exactly that: distributed computers interacting.

In The stock market as a single, very big piece of multi-threaded software at Ars Technica, Jon Stokes makes that point rather well. Talking about the Flash Crash he says

The market did what every piece of multithreaded software eventually does in response to just the wrong mix of execution conditions and inputs: it crashed.

Now there may be complex distributed systems with asynchronous interaction that have never crashed and do what is intended. But I have never come across any. So Jon’s point is reasonable. As he says:

The market is fairly fragile, which is about what you’d expect from a giant, multithreaded computer that has been brought online, piecemeal, with no oversight. The wrong input at the wrong moment could trigger a race condition, or a deadlock, a livelock, or some other concurrency hazard that brings it all down.

Perhaps the SEC should hire some of my old colleagues from the safety critical systems community.
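For readers who have not had the pleasure, the canonical hazard Jon has in mind is easy to reproduce – a textbook two-lock deadlock, nothing market-specific about it:

    import threading, time

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def venue_one():
        with lock_a:            # this thread takes the locks in one order...
            time.sleep(0.1)
            with lock_b:
                print("venue one done")

    def venue_two():
        with lock_b:            # ...and this one takes them in the opposite order
            time.sleep(0.1)
            with lock_a:
                print("venue two done")

    # Each thread ends up holding one lock and waiting forever for the other:
    # neither 'done' message ever prints. The system has not crashed; it is just stuck.
    t1 = threading.Thread(target=venue_one, daemon=True)
    t2 = threading.Thread(target=venue_two, daemon=True)
    t1.start(); t2.start()
    t1.join(timeout=2); t2.join(timeout=2)
    print("deadlocked:", t1.is_alive() and t2.is_alive())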

Systems thinking, people thinking August 25, 2010 at 8:32 pm

I was going to amuse myself this morning taking apart a truly awful Felix Salmon posting on the use of the normal distribution in finance. (That’s what it is really about – it isn’t what Felix thought it was about when he wrote it, which is part of the problem.) But instead I am going to praise an insightful article by Chrystia Freeland in the NYT.

First she highlights an important cognitive bias:

Most of us respond better to personal stories than to impersonal numbers and ideas.

Then she discusses one of the consequences:

that same bias means we are drawn to stories about people, not systems. When it comes to the financial crisis, we want heroes and villains and what-he-had-for-breakfast narratives; we are less enthralled by analytical accounts of the global financial system and the cycle of boom and bust.

Chrystia is nice enough to suggest that this is the age of the systems thinker, that those of us who can do it – and if there is one thing that this blog is about, it is systems thinking – are the new upper class. Sadly I think she is wrong. Systems thinking has the potential to be a very powerful tool, and it has had many successes. But cognitive bias means that it is always fighting an uphill battle against personality-driven narratives. Systems thinking has a marketing problem which it needs to solve before it can become the new black.

Update. This comment from Ashwin is so pertinent that I am going to hoick it up to the main text (and edit it to remove the references).

John Sterman in his book Business Dynamics says the following: “A fundamental principle of system dynamics states that the structure of the system gives rise to its behavior. However, people have a strong tendency to attribute the behavior of others to dispositional rather than situational factors, that is, to character and especially character flaws rather than the system in which these people are acting. The tendency to blame the person rather than the system is so strong psychologists call it the “fundamental attribution error”. In complex systems, different people placed in the same structure tend to behave in similar ways. When we attribute behavior to personality we lose sight of how the structure of the system shaped our choices. The attribution of behavior to individuals and special circumstances rather than system structure diverts our attention from the high leverage points where redesigning the system or government policy can have significant, sustained, beneficial effects on performance. When we attribute behavior to people rather than system structure the focus of management becomes scapegoating and blame rather than the design of organizations in which ordinary people can achieve extraordinary results.”

The great regulatory capital game – an experiment in crowd sourcing policy July 19, 2010 at 6:06 am

Here’s something I would like to do – it is far too much work for me (or, I suspect, for anything less than a team of 30) to actually do, but never mind that, let’s just run with it.

First, build a mini model of the banking system as a set of autonomous agents. You’ll need a variety of banks and brokers, securities markets and lending, central banks and monetary policy, treasury activities and trading, investment managers and hedge funds. The simulation need not be hugely complex: a few different securities will probably do, for instance, but prices should be set by real market activity, and there should be analogues of government bonds and corporate bonds. You will need the interbank markets, too, with credit risk being taken in a variety of ways. Financial institution bankruptcy can happen due to either liquidity or solvency crises, and if a financial firm goes bankrupt, its portfolio is sold to the market. Demand for credit is set by the economic cycle, and there are fundamentals bubbling along too, with random defaults of entities issuing bonds and taking loans.

Next, set some rules for the banks and brokers. We can start with the current regulatory capital rules. Banks will have a capital structure with both a term structure of debt and equity, and they will have to be capital-adequate at all times. The same goes for brokers, but they can have different reg cap rules in general to model the SEC vs. Fed divide.

Now the game. There are two classes of players. The first class is the bankers: they define trading rules for an individual bank. They can’t dictate transactions; rather, they write rules which determine what transactions a bank will do, depending on market conditions. There can be as many bankers as there are banks, but balance sheet size and initial capital are allocated randomly to players at the start of the game, subject to plausible distributions.

The game proceeds by the simulation being run through time; this is then repeated many times. The banker’s payoff is the positive part of the bank’s profit, averaged across all the simulations. So, like the real world, these guys score higher if their banks make a lot of money in a variety of conditions.

The second class of players is the regulator. This player rewrites the rules that the banks must obey. Their score is based on the number of bankruptcies and both the volatility and level of credit supply: basically they score highly if there are no bank failures and credit grows slowly but steadily.
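Sketched as code, the core loop might look something like the following. Everything here – the agent structure, the P&L process, the scoring weights – is made up for illustration; the point is only that the bankers’ rules, the regulator’s capital rule, and the two scoring functions are cleanly separable.

    import math, random, statistics

    def run_simulation(bank_rules, reg_cap_rule, periods=200):
        # One pass of the toy economy: each banker's rule sets a target balance sheet,
        # the regulator's rule caps it via a capital ratio, P&L is driven by a crude
        # economic cycle plus noise, and a bank whose capital hits zero fails.
        banks = [{"capital": random.uniform(5, 15), "assets": 100.0, "profit": 0.0, "alive": True}
                 for _ in bank_rules]
        credit_path = []
        for t in range(periods):
            cycle = math.sin(t / 20)                                     # the economic cycle
            for bank, rule in zip(banks, bank_rules):
                if not bank["alive"]:
                    continue
                target = rule(bank, cycle)                               # banker-written strategy
                cap_limit = bank["capital"] / reg_cap_rule(bank, cycle)  # regulator's constraint
                bank["assets"] = min(target, cap_limit)
                pnl = bank["assets"] * (0.01 * cycle + random.gauss(0, 0.02))
                bank["capital"] += pnl
                bank["profit"] += pnl
                if bank["capital"] <= 0:
                    bank["alive"] = False                                # solvency failure
            credit_path.append(sum(b["assets"] for b in banks if b["alive"]))
        return banks, credit_path

    def banker_score(profits_across_runs):
        # Bankers are paid on the positive part of profit, averaged across runs.
        return statistics.mean(max(p, 0.0) for p in profits_across_runs)

    def regulator_score(failures, credit_path):
        # Regulators want no failures and smooth, steady credit supply.
        return -10.0 * failures - statistics.pstdev(credit_path)

    # One aggressive banker strategy and a flat 8% capital rule, as a starting point.
    aggressive = lambda bank, cycle: bank["capital"] * (20 if cycle > 0 else 8)
    flat_eight_percent = lambda bank, cycle: 0.08
    banks, credit = run_simulation([aggressive] * 10, flat_eight_percent)
    print(regulator_score(sum(not b["alive"] for b in banks), credit))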

With sufficient (= a lot of) computing power, you could have a number of people playing as regulators, each simultaneously facing all the bankers. You could even use genetic algorithms or any other adaptive strategy you like as the regulator. It would be fascinating to see what rules emerged as winners.

There is a lot more you could do, too. For instance, you could impose a change on fundamentals and see what happened. You could road test new rules and see how players game them. You could, with a bit of work, find out what market dynamics lead to most bankruptcies, or the biggest systemic crises. You could see what bank strategies are most profitable but lead to high tail risk. It might not be as popular as World of Warcraft, but I bet if you got the user interface slick enough, quite a few financial services people would play, and all that expertise could be used to improve the capital rules. The key point is that even if you don’t believe the results of the simulation are realistic, having something that suggests financial system vulnerabilities on the basis of actual dynamics and attempted gaming of the system could be quite useful.

Why we do crazy things July 11, 2010 at 11:07 am

Rajiv Sethi makes an important and subtle point in a post on Naked Capitalism. He is discussing the behavioural finance literature, and in particular the idea that failure to correctly estimate the probability of bad outcomes leads to the design of unsafe securities that look safe:

…what troubles me about this paper (and much of the behavioral finance literature) is that the rational expectations hypothesis of identical, accurate forecasts is replaced by an equally implausible hypothesis of identical, inaccurate forecasts. The underlying assumption is that financial market participants operating under competitive conditions will reliably express cognitive biases identified in controlled laboratory environments. And the implication is that financial instability could be avoided if only we were less cognitively constrained, or constrained in different ways — endowed with a propensity to overestimate rather than discount the likelihood of unlikely events for example.

Now this is a little unfair in that the authors don’t make the explicit read-across from ‘if people are wrong about the likelihood of crashes, then they produce overpriced securities which will fail catastrophically in a crisis’ to ‘overpriced securities which failed catastrophically in a crisis were produced, therefore people mis-estimated tail probabilities’. But certainly the authors invite such a reading, so Rajiv’s comment is reasonable. It is the next part of his argument that really resonates, though:

This narrowly psychological approach to financial fragility neglects two of the most analytically interesting aspects of market dynamics: belief heterogeneity and evolutionary selection. Even behavioral propensities that are psychologically rare in the general population can become widespread in financial markets if they result in the adoption of successful strategies. As a result, asset prices disproportionately reflect the beliefs of investors who have been most successful in the recent past. There is no reason why these beliefs should consistently conform to those in the general population.

I think that this is right, and it deserves to be better understood. I would even go further, because this argument neglects the explicitly reflexive nature of market participants’ thinking. (Call it social metacognition if you really want some high-end jargon.) Traders can simultaneously understand that a behavioural propensity is rare and likely to lead to catastrophe, and still behave that way: they do this because they believe that other market participants will too, and behaving that way if others do will make money in the short term. Even if you think that it is crazy for (pick your favourite bubblicious asset) to trade that high, providing you also believe others will buy it, then it makes sense for you to buy it along with the crowd. Moreover, worse, you may well believe that they too think it is crazy: but all of you are in a self-sustaining system and the first one to get off looks the most foolish (for a while). Most people are capable of spotting a bubble if it lasts long enough: the hard part is timing your exit to account for the behaviour of all the other smart people trying to time their exit too.

Reflecting Trichet May 19, 2008 at 7:38 am

A speech by Jean-Claude Trichet at the International Capital Market Association’s Annual Conference has received considerable comment – see, for instance, Alea here. I’d like to focus on just two aspects of a long speech. First, leverage:

The challenge lies in preventing the system from feeding on itself through a spiralling process of leveraging…

Abundant liquidity, financial complexity [and] financial players’ incentive structures contrived a convergence of mechanisms that resulted in the upward spiralling of asset prices, further leveraging, increasing complexity and shrinking transparency.

The combination of leverage and complexity is a massive concentrator of model risk. Simple leverage is easy to spot: my broker won’t let me buy stock on margin with a 1% haircut. But a tranche of a CDO-squared financed via repo could have leverage much higher than 100:1. Complexity, then, can hide leverage, and it can make modeling the true return distribution difficult or impossible: just because you can structure something does not mean that you should, as Bankers Trust found out with Gibson Greetings and LIBOR-squared swaps.
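The arithmetic is worth spelling out, with stylised haircuts: the repo haircut is the cash you have to find, so position leverage is roughly one over the haircut – before you even count the leverage embedded in the tranche itself.

    def repo_leverage(haircut):
        # Buy $100 of the asset and repo it out: the cash you must find is just the
        # haircut, so leverage on the position is roughly 1 / haircut.
        return 1.0 / haircut

    for name, haircut in [("stock bought on 50% margin", 0.50),
                          ("government bond repo, 2% haircut", 0.02),
                          ("senior ABS CDO tranche repo, 1% haircut", 0.01)]:
        print(f"{name:40s} -> {repo_leverage(haircut):5.0f} : 1")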

Second, ECB open market operations:

The first response of the Eurosystem during the turmoil was to try to keep very short-term money market rates near policy rates through more active liquidity management. With this objective in mind, the ECB adjusted the distribution of liquidity supply over the course of a maintenance period, by increasing the supply at the beginning of the period and reducing it later in the period (the so-called frontloading). The average supply of liquidity remained unchanged over the whole reserve maintenance period, in line with the Eurosystem’s aim to provide to the banking system over each maintenance period the exact amount of total liquidity it needs to fulfil its liquidity deficit. The average size of the Eurosystem’s refinancing operations for the maintenance periods since August 2007 remained at around €450 billion, as in the first semester of 2007.

This is remarkable — and it makes the point beautifully that it is not the supply of cash that has been important in central bank financing operations, but rather guaranteed funding for illiquid assets. The ECB has indeed been remarkably liberal (by the standards of other central banks pre-Crunch) in the collateral it permits at the window, and so it did not need to explicitly manipulate liquidity premiums the way the Fed did.

Grand Theft Banking April 29, 2008 at 7:22 am

The next instalment of the popular computer game Grand Theft Auto is out today (or, for you really hardcore gamers, midnight yesterday). Its launch prompts me to consider how gaming could help finance, beyond the extra carry from all of those copies of the game bought on credit cards. So how about this: design a game that’s the financial system. It has deposit takers, hedge funds, investment banks, pension funds, the lot. It has a diversity of different asset classes too with real time prices. It also has shareholders, depositors, deposit protection, regulation, the interbank market, whatever you want. Your mission, player, is to set the regulations to prevent bubbles, protect depositors, allow moderate growth, and prevent moral hazard. Covertly of course the BIS will be monitoring your progress and any really good ideas get put into Basel 3.