Category / Quantitative Trading

He ain’t ergodic, he’s my brother November 17, 2013 at 1:57 pm

I have been meaning to blog for a while about ergodicity. I know, exciting stuff. Here’s the skinny:

Roughly speaking, a system is ergodic if the time average equals the phase space average. Suppose we have a financial asset with genuinely IID returns: then if we look at the average return over time, that will be the same as the average return over all possible paths.

The key point here is that computing the phase space average requires that we can reasonably take multiple copies of the system and observe different paths. A coin, say, can be tossed multiple times, allowing us to see the whole phase space.

For most financial systems, this copyability is not present. It might be reasonable to attribute a probability of default to a company a priori, for instance, but a posteriori it either defaults or it doesn’t, and we cannot take multiple copies of it and see how many times it does in repeated experiments. All we can do is look at it through time.

Given that we can’t often measure a phase space average, it would be handy if many financial systems were ergodic. Unfortunately, as this Towers Watson post points out, they often aren’t.

Risk managers therefore need to be very careful to distinguish two situations:

  1. I have a lot of genuinely independent bets going at once; and
  2. I have one bet that I repeat multiple times.

The former might, for instance, be hedging lots of different single stock options (on uncorrelated stocks, not that there are such things); the latter would be hedging a rolling position on one stock. In the first case you can reasonably take the phase space average – so if I sell the options for more than the historic volatility and I have enough of them, I will on average make money. In the second, you can’t. Here running out of money/hitting your limits and being forced to close out are much bigger issues.
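Here is a minimal sketch of the distinction (my illustration, not from the post or the Towers Watson paper), using the standard multiplicative-bet example; the payoffs and sizes are arbitrary assumptions:

```python
import numpy as np

# A toy non-ergodic bet: each period wealth is multiplied by 1.5 or 0.6
# with equal probability (genuinely IID). The phase space average return
# per period is 0.5*1.5 + 0.5*0.6 = 1.05, but the time-average growth
# factor is sqrt(1.5*0.6) ~ 0.949, so the typical path shrinks to nothing.
rng = np.random.default_rng(0)
steps, paths = 100, 50_000
wealth = rng.choice([1.5, 0.6], size=(paths, steps)).prod(axis=1)

print(f"theoretical phase space mean: {1.05**steps:.3g}")       # ~131
print(f"sample mean wealth:           {wealth.mean():.3g}")     # tail-dominated, noisy
print(f"median wealth:                {np.median(wealth):.3g}") # ~0.005: typical path ruined
```

The phase space average looks splendid; the time average, which is what any single investor actually experiences, is ruinous. That is non-ergodicity in one screenful.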

Do read the PDF linked to in the Towers Watson post for more: it’s insightful.

Market stability and quote latency August 2, 2010 at 1:48 pm

We know – or at least strongly suspect – that HFT activity has changed the nature of US equity markets, and likely others. Similarly, bots seem, if not the culprit, at least one of the culprits in the flash crash. What should we do about it? Nanex (HT zero hedge) has a suggestion:

Add a simple 50 millisecond quote expiration rule: a quote must remain active until it is executed or 50ms elapses… it may be improved (higher bid or lower offer price) at any time without waiting for the expiration period.

I like it, but I think that 50ms is too short. In particular, you need long enough for all the exchanges to react to each other before quotes can be updated, so that there is some kind of equilibrium (to overuse a currently-in-vogue term). I’d say 300ms is better. That’s still more than three quotes a second. There is no possible real economy motivation for faster price discovery than that, and there is a considerable financial stability case for it being no quicker.
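As a sketch of how such a rule might sit in an exchange gateway (my illustration, not Nanex’s specification; the names and the 300ms constant are assumptions):

```python
from dataclasses import dataclass, field
import time

MIN_LIFETIME_S = 0.300  # the 300ms minimum quote lifetime argued for above

@dataclass
class Quote:
    price: float
    is_bid: bool
    placed_at: float = field(default_factory=time.monotonic)

def may_cancel(q: Quote, now=None) -> bool:
    """A quote may be cancelled only once the minimum lifetime has elapsed."""
    now = time.monotonic() if now is None else now
    return now - q.placed_at >= MIN_LIFETIME_S

def may_replace(q: Quote, new_price: float, now=None) -> bool:
    """Price improvement (higher bid or lower offer) is allowed at any time,
    per the Nanex rule; any other amendment must wait out the lifetime."""
    improves = new_price > q.price if q.is_bid else new_price < q.price
    return improves or may_cancel(q, now)
```

Note that improvement is always allowed: the rule only stops quotes being flashed and pulled faster than anyone can trade against them.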

How it trades affects how it behaves July 30, 2010 at 6:56 am

Part of a continuing series of facts that seem obvious, but are in fact news: the changing nature of equity market trading (more bots, more ETFs) has affected the dynamics of the market. (See here for a previous item.) Specifically, according to FT alphaville (who cite Barclays research), equity correlation has risen closely in line with increased ETF volumes. Barclays say:

An important structural shift in the equity markets over the past few decades has been the advent of index funds as an alternative to actively managed mandates… Since the dominance of this kind of index-based component in the equity fund flows should logically lead to an increase in equity correlation, it is tempting to theorize that the secular shift in equity correlation documented above is driven by this effect.

They then do (a little) analysis and conclude:

while equity correlation continues to be highly dependent on volatility, the rise in indexation has led to a permanent increase in its “base” level

File under ‘n’ for ‘not a surprise’ and ‘w’ for ‘why did you think you could count on a correlation to tell you very much about the comovement of the market anyway?’
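The mechanism is easy to demonstrate. In this toy simulation (mine, not Barclays’), returns are a mix of one common index-flow factor and idiosyncratic noise, and raising the index-flow weight mechanically raises the average pairwise correlation:

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_pairwise_corr(index_weight, n_stocks=50, n_days=2000):
    """Stock returns = common index-flow factor + idiosyncratic noise."""
    common = rng.standard_normal((n_days, 1))       # one factor, all stocks
    idio = rng.standard_normal((n_days, n_stocks))  # stock-specific noise
    rets = index_weight * common + (1 - index_weight) * idio
    corr = np.corrcoef(rets.T)
    return corr[np.triu_indices(n_stocks, k=1)].mean()

for w in (0.2, 0.4, 0.6):
    print(f"index-flow weight {w:.1f}: "
          f"average pairwise correlation {avg_pairwise_corr(w):.2f}")
```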

And here comes Hurst, they think it’s all over! It is now! July 14, 2010 at 6:06 am

There has been some comment recently about a paper by Reginald Smith on the impact of high frequency trading (HFT) on market dynamics. I want to spend a little time explaining what the paper says, roughly, and why it matters.

We can clearly demonstrate that HFT is having an increasingly large impact on the microstructure of equity trading dynamics… the Hurst exponent H of traded value in short time scales (15 minutes or less) is increasing over time from its previous Gaussian white noise values of 0.5. Second, this increase becomes most marked, especially in the NYSE stocks, following the implementation of Reg NMS by the SEC which led to the boom in HFT. Finally, H > 0.5 traded value activity is clearly linked with small share trades which are the trades dominated by HFT traffic. In addition, this small share trade activity has grown rapidly as a proportion of all trades.

So first, what is a Hurst exponent?

Roughly speaking, Hurst exponents measure autocorrelation or, even more loosely, predictability. If H is close to 0.5, the series is a random walk, or what we were told equity prices did in Finance 101. In particular, if H = 0.5, the idea of volatility makes sense, and we can quantify risk using volatility.

If H is bigger than 0.5, though, the series shows positive autocorrelation: roughly, it has very busy periods when volatility is high, and quieter low volatility periods. It switches regimes between these with no warning. Thus we might try to calibrate a simple risk model, but if we are unlucky we will calibrate it in a low vol period, and then when the high vol hits our risk estimates will be wrong.
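For the curious, here is a crude rescaled-range (R/S) estimate of H, the classic approach; this is a sketch only, omitting the small-sample bias corrections (e.g. Anis-Lloyd) a serious estimate would need:

```python
import numpy as np

def hurst_rs(x, min_chunk=16):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent H.
    H ~ 0.5 for white noise; H > 0.5 indicates persistence."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        chunks = x[: n - n % size].reshape(-1, size)
        dev = chunks - chunks.mean(axis=1, keepdims=True)
        z = dev.cumsum(axis=1)                    # cumulative deviations
        r = z.max(axis=1) - z.min(axis=1)         # range per chunk
        s = chunks.std(axis=1)                    # scale per chunk
        ok = s > 0
        sizes.append(size)
        rs.append((r[ok] / s[ok]).mean())
        size *= 2
    # H is the slope of log(R/S) against log(chunk size)
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(0)
white = rng.standard_normal(8192)
ar = np.zeros(8192)                               # persistent AR(1) series
for t in range(1, len(ar)):
    ar[t] = 0.8 * ar[t - 1] + white[t]
print(f"white noise H ~ {hurst_rs(white):.2f}")     # close to 0.5
print(f"persistent series H ~ {hurst_rs(ar):.2f}")  # above 0.5
```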

So, what the paper seems to have proved (and I have not checked all the details) is that HFT has changed the nature of stock price returns from being a random walk (H = 0.5) to having significant positive autocorrelation. Increasingly we see quiet periods when not much happens followed by periods of intense volatility, and the change between these is unpredictable. Now notice the time period cited, 15 minutes or less. What is happening, then, is that HFT appears to be creating islands of high volatility amid an ocean of more stable prices. Something sets off a price change, which creates a flurry of HFT activity, exacerbating volatility; this then dies away over a period of minutes or hours.

Why does this matter to the ordinary investor? Simply that their trading might hit one of those flurries of activity, and they might well get a significantly worse price than average if it does. Moreover, of course, simple risk models such as VAR become less and less accurate the higher the autocorrelation. I suspect that on the typical VAR one day holding period this does not matter much, but it might.

Finally, there is the issue that HFT might be increasing the risk of flash crashes. If autocorrelation is too high then the probability of very large deviations from the mean over short timescales increases dramatically. I have no idea if this research supports the idea that we have got to that point yet. But I do think that someone should find out.

The empirically optimal volatility for hedging short dated S&P 500 index options February 16, 2010 at 5:28 am

A draft of a new note of mine can be found here. Here is the abstract:

This note studies the optimal volatility for Black Scholes hedging in practice. The notion of hedging at a volatility which minimises the variance of the daily P/L of the hedged portfolio is introduced, and this measure is christened prophetic volatility. The relationship between realised volatility, implied volatility and prophetic volatility is studied for S&P 500 index options during the period 1999-2009.

The main result is that prophetic volatility is close to realised for the whole of the data set; for quiet market conditions (such as pertained from mid 2003 to mid 2005); and for the market turbulence induced by the Credit Crunch. Implied volatility differs from prophetic for the whole period, indicating that it is at best an imperfect estimate of the right hedge volatility.

We review situations when prophetic volatility differs significantly from realised, highlighting the practical importance of gamma weighted realised volatility. Finally, the spread required to at least break even from selling calls is analysed for both prophetic and implied volatility.
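To give a flavour of the idea, here is a toy stand-in (emphatically not the note’s methodology): simulate GBM paths at a known realised vol, mark a short call at that true vol throughout, and vary only the vol used to compute the hedge delta. The daily P/L variance should be smallest when the hedge vol matches the realised vol; every parameter below is an illustrative assumption.

```python
import numpy as np
from scipy.stats import norm

def bs_call(s, k, t, vol):
    d1 = (np.log(s / k) + 0.5 * vol**2 * t) / (vol * np.sqrt(t))
    return s * norm.cdf(d1) - k * norm.cdf(d1 - vol * np.sqrt(t))

def bs_delta(s, k, t, vol):
    return norm.cdf((np.log(s / k) + 0.5 * vol**2 * t) / (vol * np.sqrt(t)))

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 500, 252, 1 / 252
real_vol, k = 0.20, 100.0                          # r = 0 for simplicity
z = rng.standard_normal((n_paths, n_steps))
s = 100.0 * np.exp(np.cumsum(-0.5 * real_vol**2 * dt
                             + real_vol * np.sqrt(dt) * z, axis=1))
s = np.hstack((np.full((n_paths, 1), 100.0), s))
t = (n_steps - np.arange(n_steps)) * dt            # time to expiry at each step

v_now = bs_call(s[:, :-1], k, t, real_vol)         # marks at the true vol
v_next = bs_call(s[:, 1:], k, np.maximum(t - dt, 1e-9), real_vol)
ds = np.diff(s, axis=1)

for hedge_vol in (0.10, 0.15, 0.20, 0.25, 0.30):
    delta = bs_delta(s[:, :-1], k, t, hedge_vol)   # only the hedge vol varies
    daily_pl = -(v_next - v_now) + delta * ds      # short call plus delta shares
    print(f"hedge vol {hedge_vol:.2f}: mean daily P/L std "
          f"{daily_pl.std(axis=1).mean():.4f}")
```

In this toy the minimiser sits at the 20% realised vol, which is the pattern the abstract reports for the real S&P data.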

Comments are welcome.

Quants, Lightbulbs and the Demise of the Financial System January 28, 2010 at 6:48 am

From Naked Capitalism reader Matthew G:

How many quants does it take to screw in a lightbulb?

Using ten racks of co-located blade servers, one quant can detect a janitorial inefficiency, step in between janitor and light fixture, and screw in 49,500 bulbs in less than a millisecond, keeping five hundred lightbulbs of profit.

Two quants competing with each other can screw in 99,998 bulbs in a millisecond, with each quant retaining a profit of one lightbulb.

When ten quant firms try to screw in a light bulb, the bulb explodes, the light fixture gets ripped from the ceiling, the building falls down, the entire electrical grid of the city of Greenwich shuts down, innocent civilians all over the world have their retirement accounts electrocuted, and the Federal Reserve has to give the counterparties of each quant firm five hundred million light bulbs to maintain the stability of the system.

Update. FT alphaville saves me from having an entirely frivolous post by referring to this article at Trader’s Magazine. They say:

Bryan Harkins, an executive with the Direct Edge ECN, noted the market is “saturated” with high-frequency shops. He doesn’t expect overall industry volume to increase substantially in the next few years.

Volume, in the past three years, has doubled due to a large extent to the activities of high-frequency traders. Average daily volume is about 10 billion shares today. That compares to 5 billion shares in early 2007.

“Someone leaves a high-frequency trading shop to start a new one,” Harkins said. “You do a meeting [with them] and they say ‘We’re going to do 100 million shares a day.’ You get all excited with the next big account and then six months later they’re struggling to stay in business.” About half of Direct Edge’s volume comes from high-frequency trading firms, Harkins said.

[NYSE Euronext's Paul] Adcock noted the changes in volume at NYSE Arca’s top five high-frequency accounts mirror those of the VIX “almost perfectly.” And because most high-frequency strategies are similar, he adds, only the “biggest and fastest will make those strategies work.”

(Emphasis mine.) The comments above refer to the good times, too. Imagine what would happen if one of those big guys liquidates, or if we have a very high volatility episode with extreme decorrelations, as might happen for instance if there is a sovereign crisis. It won’t be pretty but we cannot say that we have not been warned.

I’ll take an alpha please Bob* December 1, 2009 at 2:50 pm

Dealbreaker says:

Robert Litterman is head of quantitative resources at Goldman Sachs Asset Management… And as he sees it, … quantitative hedge funds have to do a better job of making money for their clients. And in Litterman’s considered opinion, they need to find new ways of making money. New and non-quantitative, apparently.

We’re putting together data that’s not machine-readable.

I see. Any other pearls of wisdom?

You have to adapt your process. What we’re going to have to do to be successful is to be more dynamic and more opportunistic.

Totally worth the price of admission to the Quant Invest 2009 conference (flight to Paris not included). Thank you, Bob.

Now that is quite amusing, but perhaps a little unfair. What is clear is that you can make money for extended periods of time by being long liquidity premiums and short volatility. Many hedge fund ‘strategies’ are just versions of this one: get exposure to illiquid assets, leverage up, and hope there is not a flight to quality before you have got paid your 2 and 20. If you can guarantee your leverage through good times and bad (or are not leveraged at all and can lock investors in for long enough), this strategy is often successful even through a crisis. But if you have to sell into the storm, things will go rather less well.

One thing that might be interesting, then, is somehow to measure alpha relative to the probability of having to deleverage. That is, we ought to level the playing field between funds that generate high alpha at the expense of running the risk of having to sell into a crisis and those funds which generate less excess return, but which never have to deleverage.
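One crude way to operationalise this (my sketch; the thresholds and the IID bootstrap are assumptions, not a production methodology): estimate the probability of being forced to deleverage over a year, and scale the fund’s alpha by the survival probability.

```python
import numpy as np

def deleverage_adjusted_alpha(excess_rets, leverage, margin_buffer=0.25,
                              horizon=252, n_sims=10_000, seed=0):
    """Annualised alpha scaled by the probability of surviving the horizon
    without a forced deleveraging, i.e. without leveraged equity falling
    through the margin buffer. IID bootstrap of daily excess returns."""
    rng = np.random.default_rng(seed)
    sims = rng.choice(excess_rets, size=(n_sims, horizon)) * leverage
    equity = (1 + sims).cumprod(axis=1)
    p_forced = (equity.min(axis=1) < 1 - margin_buffer).mean()
    alpha = np.mean(excess_rets) * 252 * leverage
    return alpha * (1 - p_forced), p_forced

rng = np.random.default_rng(1)
rets = rng.normal(0.0004, 0.01, size=2000)   # toy daily excess returns
adj, p_forced = deleverage_adjusted_alpha(rets, leverage=4.0)
print(f"P(forced deleveraging) = {p_forced:.1%}, adjusted alpha = {adj:.1%}")
```

A fund levered to the eyeballs scores well on raw alpha and badly here, which is rather the point.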

*OK, some of you might not remember Blockbuster. It was a classic, in the sense of classically, heroically awful.

Quant funds and the field approximation April 21, 2009 at 6:42 am

It has come to general attention recently that many quant funds have had a terrible few months: see here for more details. Specifically, the momentum-following funds have suffered badly from the crap rally of the last little while.

What is going wrong?

One way to see the issue is to consider a technique that sometimes works well in statistical physics. Don’t panic, I promise there will be no hard maths. It’s like this. Sometimes, you want to know how a given something behaves inside a solid. Let’s make the something an atom (although it doesn’t have to be). Say this is an atom of impurity in an otherwise pure crystal. How will this impurity affect things?

One could attempt to model the interaction of the atom of impurity with its neighbours, and their neighbours, and so on. If you could do those calculations, the results would be precise. But often you can’t, because there are too many effects to consider, and too many other atoms which can exert an effect. So a good short cut is to first calculate the properties of a perfectly pure crystal, and then treat the impurity like a boat floating on a sea with those properties. You don’t model each neighbour separately, in other words: you model the combination of their effects as one thing. This is sometimes called the mean field approximation, as you are calculating the field of the pure crystal, and then looking at how the impurity behaves in that field.

This works pretty well for one impurity, and it isn’t too bad when there are many, providing the percentage of impurity atoms is low enough. But once the percentage of impurities starts rising to the point where one impurity interacts with another, the approximation starts to fail rather badly. Furthermore if the impurities change the way the crystal interacts with itself, then this also means that a new approach is needed.

And so it is with hedge funds. When there were few quant hedge funds interacting with a market that was mostly driven by real money, then the fund’s model of market behaviour was reasonable. But as the percentage of trading activity that is hedge fund originated increases, that model becomes less and less accurate. I suspect that phase transitions are possible, whereby the market dynamics suddenly change for only a small increase in the percentage of fund activity. What’s happening, then, is that we have too many hedge funds chasing too few arbs. Things won’t get better until the capital allocated to quant funds declines significantly.
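A toy illustration of the breakdown (mine, with arbitrary parameters): let returns be real-money noise plus a trend-chasing demand term proportional to the quant share of activity. As that share rises, the autocorrelation induced by the funds’ own trading comes to dominate the dynamics they were calibrated on:

```python
import numpy as np

def simulate_returns(quant_share, n_steps=20_000, noise_vol=0.01,
                     feedback=0.01, seed=0):
    """Returns = exogenous noise + trend-chasing demand from quant funds,
    scaled by their share of activity. Purely illustrative."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n_steps)
    for t in range(1, n_steps):
        chase = np.sign(r[t - 1])     # funds pile in after yesterday's move
        r[t] = noise_vol * rng.standard_normal() + quant_share * feedback * chase
    return r

for share in (0.1, 0.3, 0.5, 0.7):
    r = simulate_returns(share)
    ac = np.corrcoef(r[:-1], r[1:])[0, 1]
    print(f"quant share {share:.1f}: vol {r.std():.4f}, lag-1 autocorr {ac:.2f}")
```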

Sociologists do models, kinda February 13, 2009 at 6:59 am

From Reflexive Modeling: The Social Calculus of the Arbitrageur by Daniel Beunza and David Stark:

Modeling entails fundamental assumptions about the probability distribution faced by the actor, but this knowledge is absent when the future cannot be safely extrapolated from the past…

By privileging certain scenarios over others, by selecting a few variables to the detriment of others, and in short, by framing the situation in a given way, models and artifacts shape the final outcome of decision-making. This … is the fundamental way in which the economics discipline shapes the economy, for it is economists who create the models in the first place…

…models can lead to a different form of entanglement. In effect, models can lock their users into a certain perspective on the world, even past the point in which such perspective applies to the case at hand. In other words, models disentangle their users from their personal relationship with the actor at the other side of the transaction, but only at the cognitive cost of entangling them in a certain interpretation.

Despite its focus on relatively uninteresting models (merger arb), this is a worthwhile paper for anyone interested in how traders really use models.

Prop trading tech February 9, 2009 at 7:32 am

A friend of mine is dipping a toe into writing code to support prop trading. So, in the spirit of telling tales out of school, let me make some observations about this business.

  • The users don’t know what they want. And they cannot spare the time to tell you even their incoherent and ill-posed ideas about what they think they want.
  • As soon as they get something, what they think they want changes.
  • What they want also changes as the market moves, as new research comes out, and as their ideas turn out to be worthless.
  • The less the analytics depend on one particular paradigm, the better. Flexibility is the key to profitability.
  • Expect that things will work for a while, then fail. This means you need to build in ‘does it still work’ measures which might help avoid sailing over a cliff (a sketch follows this list).
  • Risk measures are often nonsense, but you have to provide them anyway. Try to give both conventional risk measures (often silly, but expected) and less conventional ones (more useful, but harder to get accepted).
  • Remember that model error is omnipresent and that over-fitting is very common. Expect the model to fail unpredictably and catastrophically.
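Here is the kind of ‘does it still work’ check I have in mind, as a minimal sketch; the window and thresholds are assumptions that would need tuning per strategy:

```python
import numpy as np

def still_working(daily_pl, window=60, min_sharpe=0.0, max_drawdown=0.10):
    """Crude health check on a rolling window of daily P/L (capital
    normalised to 1): flag the strategy for review if its rolling Sharpe
    goes negative or its drawdown exceeds a limit."""
    recent = np.asarray(daily_pl[-window:], dtype=float)
    sharpe = recent.mean() / (recent.std() + 1e-12) * np.sqrt(252)
    equity = 1 + np.cumsum(recent)
    drawdown = np.max(np.maximum.accumulate(equity) - equity)
    return sharpe >= min_sharpe and drawdown <= max_drawdown
```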

None of this is a problem as long as you are expecting it. Just don’t think that IT development for prop trading is linear…

Tarring Taleb January 21, 2009 at 12:44 pm

I have always been a little suspicious of Nassim Taleb. He seems to take too much pleasure in discussion of crises. And his first book — a very conventional account of hedging — isn’t very useful for actually running portfolios of options. Now a post on Models and Agents (an excellent blog I have only found recently) gives a more focussed critique:

the current crisis is not a black swan. Alas, the world’s economic history has offered a slew of (very consequential) credit and banking crises … So not only aren’t credit crises highly remote; they can be a no-brainer, particularly if they involve extending huge loans to people with no income, no jobs and no assets.

Taleb also recommends that we buy insurance against good black swans—that is, investments with a tremendous (though still highly remote) upside but limited downside. For example, you could buy insurance against the (unlikely?) disappearance of Botox due to the discovery of the nectar of eternal youth. And make tons of money if it happens.

And that surely is the point. Yes, the unexpected happens with considerable frequency. But knowing which black swans are more likely than the market is pricing is the hard part. Buying protection in the wings on everything is far too expensive to be a good trading strategy. If all Taleb’s observations amount to is the claim that being long gamma can sometimes be profitable, then they are hardly prophetic. What would be much more useful would be his analysis of when, exactly, black swan insurance is worth buying.

Backtesting quant strategies January 8, 2009 at 11:17 am

Felix Salmon has an excellent post today on quant strategies. As he points out, knowing whether you have something worth trading isn’t exactly easy at the moment.

When you backtest, do you backtest through the quant blow-up of 2007 and the stock-market meltdown of 2008? If so, do you really think that’s going to give you the kind of trading idea which will make money going forwards? And if not, then what do you ignore, and why do you ignore it, and what makes you think you won’t run into a third period of high volatility which will lie well outside any reasonable assumptions you might make?

Up until 2007, the problems with quant funds was that the models didn’t remotely conceive of the world as it transpired. Now, the problem with quant funds is that they can’t help but conceive of the world as it transpired.

This applies more broadly, of course. No one has any recent data on normal markets because there haven’t been any recently…

Avoiding failure in fundland April 9, 2008 at 1:58 pm

What is working for hedge funds in the current climate? Bloomberg has a discussion of what isn’t:

Hedge-fund titans James Simons and Stephen Mandel are showing the biggest losses of their careers in the $1.9 trillion industry’s worst start in more than a decade.

Simons’s $18 billion Renaissance Institutional Equities Fund declined 12 percent since its value peaked last May, investors with direct knowledge of the situation said. Mandel’s Lone Cedar Fund dropped about 10.6 percent from its high in December, according to people familiar with the fund.

What is interesting is that Simons and Mandel are very different kinds of manager: Simons is a quant whereas Mandel is an old-fashioned stock picker, albeit a highly respected one. Just to add colour, elsewhere there has been some discussion of hedge fund attrition and the evolution of models in quant funds. Where does this leave us?

  • Model risk has always been with us, but with gapping markets, expensive funding, a renewed focus on counterparty risk, and a flight to quality, now is not the time to be highly leveraged or to rely on any strategy which assumes short- or medium-term mean reversion. History, for a while at least, will not be repeating itself.

  • Your leverage should be calibrated on the assumption that asset prices can jump without you being able to trade, and that your leverage provider will then make a fundamentally wrong but just about justifiable margin call based on the most conservative mark to market they can come up with (a toy calibration follows this list).
  • There is good money to be made in picking up fundamentally solid companies cheaply but again you can lose money in the short term so having enough capital and patience to wait it out is key. Do as Warren did and buy solid forward earnings for cash.
  • Remember that even if a model backtests well the very act of using it in any size changes the market dynamics. And the chances are that even if you aren’t using it in size someone else will be: see for instance Have we quants been brainwashed by Barra here.
  • Keeping your leverage low is important, but so is reducing both ordinary and alternative beta. There are a number of funds that are close to the edge at the moment, at least if we believe this source. Personally I prefer a more glib but funnier source here. But in any event, ensure the portfolio is well hedged until the chaos has subsided and bear in mind that there will be more bodies on the slab before this is over.
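The jump-plus-margin-call point reduces to arithmetic. A toy calibration (every number an assumption, not a recommendation):

```python
def max_safe_leverage(jump=0.25, haircut=0.10, buffer=0.05):
    """Leverage L means assets = L * equity. Assume assets gap down by
    `jump` before you can trade, and your leverage provider then applies
    a further conservative mark of `haircut`. Remaining equity per unit
    of starting equity is 1 - L * (jump + haircut); require that it stays
    above `buffer`."""
    return (1 - buffer) / (jump + haircut)

print(f"max leverage ~ {max_safe_leverage():.1f}x")   # ~2.7x on these numbers
```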

    [Image: rabbits]

    The first rule of the game is staying in the game, and the market really can remain irrational for longer than you can remain solvent. [It appears that Galbraith did not actually say this, but he ought to have done.]

Renaissance man December 20, 2007 at 7:47 am

Alea reports a talk that James Simons, founder of Renaissance Technologies, gave at NYU recently. Simons is a highly successful quant investor so his remarks are interesting. The part of the Alea article that really piqued my interest was:

[...] perhaps the most interesting observation came in response to a question posed by the moderator, Nobel Prize-winner Robert Engle: “Why don’t you publish your research, the theory behind your trading methods? If not while you are active in the markets, perhaps later on.”

Simons’ reply – there is nothing to publish. Quantitative investment is not physics. The markets have no fundamental, set-in-stone truths, no immutable laws. Financial “truth” changes constantly, so that a new paper would be needed almost every week.

The implication is that there is no eternal theorem of finance that could serve as an infallible guide through all the ages. Indeed, there can be no Einstein or Newton of finance. Even the math genius raking in $1 billion and consistently generating 30%-plus annual returns wouldn’t qualify. The terrain is just too lawless.

Simons’ view seems to me to be obviously true, although I don’t quite agree with the Alea spin. It isn’t that there is no law, it is that the law changes as the behaviour of market participants changes. Yesterday’s arb is today’s theorem is tomorrow’s unrealistic simplification. As I said over a year ago, mostly the market trades based on the current orthodoxy. But big news changes that orthodoxy – as is happening at the moment in the liquidity markets – and so to make a lot of money you need to be willing to keep changing your theory of asset prices.

This neatly brings me to a related topic, the non-equilibrium nature of financial markets. In retrospect, Walras’ idea of an auctioneer groping towards equilibrium (word of the week – tâtonnement) is really unhelpful because it suggests that there is enough time for this process to be completed and equilibrium reached before the next piece of news hits the market. I don’t think this is true. Rather I conjecture that the process is much more like a game of tetherball, with each new news item changing people’s opinions and hence moving the market long before equilibrium is reached from the previous piece. The ball almost never hangs by the pole, so any theory which analyses where it will come to rest isn’t much use in determining who is going to win the game.

The last piece of the puzzle is the primary role of transactions. There are no prices without transactions to establish them – lots of transactions. So it is only opinions about asset prices which lead to trading that matter. You can be right for a long period about the fundamentals, but if your assumption about how fundamentals lead to trading is wrong then you will lose money. For example I called the weakness of Japan completely correctly through the second half of the 1990s and the first half of the 2000s, but I was wrong for extended periods on dollar/yen because I hadn’t accounted for the actions of the BoJ and other market participants’ beliefs about the BoJ. To make a lot of money you need to predict what most other market participants’ trading-related beliefs will be and get your position on before they do. Predicting fundamentals is only useful if they will influence future trading: on the flip side, predicting wrong beliefs is just as good as predicting right ones if they pertain for long enough for you to make money.

Black box trading and information fusion October 30, 2007 at 7:19 am

According to Wikipedia, Information Fusion

refers to the field of study of techniques attempting to merge information from disparate sources despite differing conceptual, contextual and typographical representations

The convention is to keep the term data fusion for the situation where all information is quantitative, and use information fusion for the broader problem of integrating quantitative and qualitative data.

Another authority says that data fusion

takes isolated pieces of sensor output and turns them into a situation picture: a human-understandable representation of the objects in the world that created those sensor outputs.

Basically then, whenever you have diverse data which you have to try to turn into a coherent picture, you are performing data or information fusion.

Unsurprisingly much of the academic interest in this area occurs in limited problem domains: figuring out where the planes are from radar and visual data, for instance, or combining multiple different sonar sources to get a more complete picture of what’s swimming around you. Many quantitative trading models are of this class: they take feeds of market data and transactions and attempt to form a picture of where the market will go next. One simple class of models, for instance, is the trend followers. Often the idea of momentum is used: when markets are rising on increasing volume with low volatility, the models pile in, perhaps intensifying the rise. Decreasing volumes and/or rising volatility are sometimes used as triggers to reduce the size of the trade.
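As a concrete (and deliberately naive) sketch of such a model, with every threshold an assumption rather than a recommendation:

```python
import numpy as np

def momentum_signal(prices, volumes, lookback=20, vol_cap=0.02):
    """Toy trend follower: long when the market is rising on increasing
    volume with low volatility, flat otherwise. Expects numpy arrays with
    at least 2 * lookback + 1 observations."""
    rets = np.diff(np.log(prices[-(lookback + 1):]))
    rising = rets.mean() > 0
    quiet = rets.std() < vol_cap                  # low realised volatility
    volume_up = (volumes[-lookback:].mean()
                 > volumes[-2 * lookback:-lookback].mean())
    return 1.0 if (rising and quiet and volume_up) else 0.0
```

The exit triggers mentioned above fall out naturally: the signal drops to flat as soon as volumes decline or volatility rises.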

Many quantitative models, then, implicitly have a confidence estimate built in. When they strongly believe in their own predictions, they put a trade on. When they either don’t believe in them, or they cannot make a prediction, the trade is taken off.

This feature is important: quantitative trading has been described as picking up pennies in front of a steam roller, and certainly many trading strategies act like short gamma positions, making a little money when they work, but losing a great deal when they are wrong. A false negative – a trade that you don’t think will work and so don’t make, but which would in fact have been profitable – is a lot less bad than a false positive – a trade you do think will work but turns out not to. The magnitude of this issue can be seen from Morgan Stanley’s $480M one day quant trading loss.

For this reason, some quant traders use multiple models and only trade when all of them are giving the same signal. If the models are sufficiently different and do not share common assumptions this helps to reduce model risk.
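A minimal version of that gating logic (my sketch; a real implementation would weight models by track record rather than demanding unanimity):

```python
import numpy as np

def consensus_trade(signals):
    """Trade only when every model agrees on direction; stand aside
    otherwise. With sufficiently different models this cuts down the
    false positives discussed above, at the cost of fewer trades."""
    signs = {int(np.sign(s)) for s in signals}
    if signs == {1}:
        return 1     # all models say long
    if signs == {-1}:
        return -1    # all models say short
    return 0         # any disagreement, or a flat model: no trade

print(consensus_trade([0.8, 0.3, 1.2]))    # 1: unanimous long
print(consensus_trade([0.8, -0.3, 1.2]))   # 0: one dissenter kills the trade
```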

It occurred to me recently that another approach might be to cast quantitative trading as an information fusion rather than a data fusion problem. That is, is there non-quantitative information that might be useful, in particular in avoiding false positives by making the model more doubtful in situations where more care is needed? One of the antecedents here is the theory of prediction markets: when a large number of independent people have an opinion, a suitable weighting strategy can often lead to better predictions than any individual pundit. Note that I am not discussing analysts’ opinions here – there are clearly institutional biases at work there, and the history of collective analyst predictions is not that promising. Rather I am suggesting trying to use the commentariat, ideally as large a body of it as possible, as a signal in its own right.

When enough blogs start to discuss a possible crash, that is a sell signal akin to rising volatility or rising market risk premiums. Such an information fusion based quantitative trading model would be of more use in global macro than in very short term applications like index arb, but the idea of using rising worry as a deleveraging signal could be interesting. Or it could just be a heap of potatoes.