Category / Information Fusion

Dennett’s seven tips May 20, 2013 at 2:13 pm

The Guardian has an instructive article discussing Daniel Dennett’s seven tips for better thinking. Here’s a short summary (of their summary).

  1. USE YOUR MISTAKES. Try to acquire the weird practice of savouring your mistakes, delighting in uncovering the strange quirks that led you astray.
  2. RESPECT YOUR OPPONENT. Attempt to re-express your target’s position clearly, vividly and fairly… List any points of agreement… Mention anything you have learned from your target… Only then are you permitted to say so much as a word of rebuttal or criticism.
  3. THE “SURELY” KLAXON. When you’re reading or skimming argumentative essays… look for “surely” in the document and check each occurrence. Not always, not even most of the time, but often the word “surely” is as good as a blinking light locating a weak point in the argument.
  4. ANSWER RHETORICAL QUESTIONS.
  5. EMPLOY OCCAM’S RAZOR.
  6. DON’T WASTE YOUR TIME ON RUBBISH.
  7. BEWARE OF DEEPITIES. A deepity… is a proposition that seems both important and true – and profound – but that achieves this effect by being ambiguous. On one reading, it is manifestly false, but it would be earth-shaking if it were true; on the other reading, it is true but trivial.

Equity, debt, freezers and TVs April 29, 2013 at 9:37 am

My freezer is like equity, my TV is like debt.

Let me explain.

There is a push from electricity suppliers to distinguish appliances that need their power now – like TVs – from those that can wait a few minutes – like freezers. We might even imagine a situation where there are two types of plug connecting to two (virtual) networks with two tariffs: an expensive, power-on-demand one; and a cheaper, 'give me power when you can' one (with some agreed standard for how long delivery could be delayed, etc.). This would typically be achieved using smart devices, so my freezer would tell the network that it needed half a kilowatt hour of electricity sometime in the next ten hours, and the network would deliver it when convenient, given the total load. The plug would contain a network connection as well as a power connector, allowing the freezer to talk to the grid.
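
To make that concrete, here is a minimal sketch, in Python, of how such a negotiation might look. The message format, the device names and the 'fill the slackest hours first' rule are all my invention, not anything a real grid operator does:

```python
from dataclasses import dataclass

@dataclass
class DeferrableRequest:
    device: str
    kwh_needed: float      # total energy required
    deadline_hours: int    # must be delivered within this window

def schedule(request: DeferrableRequest, forecast_load: list[float],
             capacity: float) -> dict[int, float]:
    """Fill the request into the hours with the most spare capacity,
    never exceeding the grid's capacity in any hour."""
    # Sort hours inside the deadline window by spare headroom, largest first.
    hours = sorted(range(request.deadline_hours),
                   key=lambda h: capacity - forecast_load[h], reverse=True)
    plan, remaining = {}, request.kwh_needed
    for h in hours:
        if remaining <= 0:
            break
        slot = min(remaining, max(capacity - forecast_load[h], 0.0))
        if slot > 0:
            plan[h] = slot
            remaining -= slot
    return plan  # hour -> kWh the grid promises to deliver

# The freezer asks for 0.5 kWh sometime in the next ten hours.
print(schedule(DeferrableRequest("freezer", 0.5, 10),
               forecast_load=[0.9, 0.95, 0.99, 0.7, 0.6, 0.5, 0.4, 0.6, 0.8, 0.9],
               capacity=1.0))
```

The point of the sketch is only that the appliance states an amount and a deadline, and the grid chooses when within that window to deliver.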

This allows much better peak management. You can deny power to freezers during that critical five-minute commercial break at half time in the cup final, giving you a better chance of boiling all those kettles.

The problem with debt is not just that it is leverage, but also that it is demandable liquidity: the debt holder can demand their payment now. (Thus contingent liquidity schemes like lines of credit are like pumped storage power: they allow a solvent but illiquid party to meet peak liquidity demand.) The two-tariff idea is akin to suggesting that there is a place for contracts whose payments can be delayed but which are otherwise senior – like subordinated bonds, but where it is genuinely acceptable to use the deferral feature. Indeed, Islamic finance has always had such an idea, and many firms in practice treat supplier invoices that way. The nice part about the smart freezer, though, is that there is in some sense a negotiation between the freezer and the grid: could we imagine a similar 'I need $1M sometime in the next two days… OK, I will get it to you on Tuesday at 7.30am' idea in finance?
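
Here is a hypothetical sketch of that financial analogue, under the same assumptions: a claim that is senior but deferrable, where the creditor states an amount and a window and the debtor commits to a settlement time inside it. The 'earliest hour at which projected cash covers the claim' rule is mine, purely for illustration:

```python
from datetime import datetime, timedelta

def negotiate_settlement(amount: float, window_hours: int,
                         projected_cash: dict[int, float],
                         now: datetime) -> datetime:
    """Pick the earliest hour within the window at which projected free
    cash covers the claim; raise if the debtor cannot meet it at all."""
    for h in range(window_hours + 1):
        if projected_cash.get(h, 0.0) >= amount:
            return now + timedelta(hours=h)
    raise ValueError("claim cannot be met inside the agreed window")

# 'I need $1M sometime in the next two days' -> 'I will get it to you tomorrow at 7.30am'
now = datetime(2013, 4, 28, 7, 30)
cash = {h: 0.4e6 if h < 24 else 1.2e6 for h in range(49)}
print(negotiate_settlement(1e6, 48, cash, now))   # settles 24 hours later
```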

Prices as modified Schelling Points September 3, 2011 at 7:14 pm

This idea comes from Doug’s comment to my last post. First, what is a Schelling Point?

From Wikipedia (mildly edited):

Tomorrow you have to meet a stranger in New York City. Absent any means of communication between you, where and when do you meet them? This type of problem is known as a coordination game; any place in the city at any time tomorrow is an equilibrium solution.

Schelling asked a group of students this question, and found the most common answer was “noon at (the information booth at) Grand Central Station.” There is nothing that makes “Grand Central Station” a location with a higher payoff (you could just as easily meet someone at a bar, or in the public library reading room), but its tradition as a meeting place raises its salience, and therefore makes it a natural “focal point.”

The crucial thing, then, is that Schelling points are arbitrary but (somewhat) effective equilibrium points. (For an interesting TED talk on Schelling points, try here.)
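
As a toy illustration (my sketch, not Schelling's), here is the coordination game in code: with no salience, every meeting point is equally likely and strangers rarely coincide; give one point even a modest salience boost and the coordination rate jumps:

```python
import random

def coordination_rate(salience: dict[str, float], trials: int = 100_000) -> float:
    """Two strangers independently pick a spot with probability proportional
    to its salience; return how often they end up in the same place."""
    spots = list(salience)
    weights = list(salience.values())
    hits = sum(random.choices(spots, weights)[0] == random.choices(spots, weights)[0]
               for _ in range(trials))
    return hits / trials

uniform = {s: 1.0 for s in ["Grand Central", "a bar", "the library", "Times Square"]}
salient = dict(uniform, **{"Grand Central": 5.0})
print(coordination_rate(uniform))   # ~0.25: no focal point
print(coordination_rate(salient))   # ~0.44: tradition does the coordinating
```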

Schelling won the Nobel prize in Economics in part for the Points which bear his name. But what do they have to do with prices?

Well, in a sense a price is a Schelling point. Two people need to agree on it in order to trade. There is no particular reason that BofA stock at $7.25 is a better price than $5 or $10; sure, stock analysts may well disagree, but I am willing to bet that few of them could get to $7.25 for BofA based on publicly available information excluding prices.

As Doug says, this is even more the case for an illiquid financial asset. Here there are few prior prices to inform the decision as to what solution to propose to the Schelling coordination problem. A proper appreciation of the arbitrary nature of the problem is required here.

Note, by the way, that I called a price a modified Schelling point. This is because, unlike a typical Schelling problem, you often know the solution that others have picked, because you can see the prior prices at which assets have traded. After all, the Schelling problem for strangers meeting in New York is a lot easier if you know the most common answer is ‘Grand Central Station at noon’.
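
A minimal sketch of the 'modified' part, with made-up numbers: buyer and seller each propose a price, and a trade happens when their proposals are close enough. Without a visible last trade their proposals are scattered and rarely meet; anchoring on a published prior price makes agreement far more likely:

```python
import random
from typing import Optional

def agreement_rate(prior_price: Optional[float], trials: int = 100_000,
                   tolerance: float = 0.10) -> float:
    """Buyer and seller each propose a price; a trade happens when the
    proposals are within `tolerance` of each other."""
    def propose() -> float:
        if prior_price is None:
            return random.uniform(5.0, 10.0)       # no anchor: wide private views
        return random.gauss(prior_price, 0.05)     # anchor on the last print
    deals = sum(abs(propose() - propose()) <= tolerance for _ in range(trials))
    return deals / trials

print(agreement_rate(None))   # low: proposals rarely coincide
print(agreement_rate(7.25))   # high: the prior trade acts as a focal point
```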

I like the way that the price-as-Schelling-point metaphor emphasises the arbitrary nature of prices. Another good thing about it is that it highlights that buyer and seller together construct the equilibrium. If we say the answer is PDT at 9pm, then it is. Besides, PDT serves Benton’s old fashioneds, which are apparently things of genius, so this solution has particular merit. Note that I can make this solution more plausible by publishing maps of PDT, linking to positive reviews, etc. – think of this as the equivalent of equity research. I make my solution better known so that there is more chance that you will pick it too.

Systems thinking, people thinking August 25, 2010 at 8:32 pm

I was going to amuse myself this morning taking apart a truly awful Felix Salmon posting on the use of the normal distribution in finance. (That’s what it is really about – it isn’t what Felix thought it was about when he wrote it, which is part of the problem.) But instead I am going to praise an insightful article by Chrystia Freeland in the NYT.

First she highlights an important cognitive bias:

Most of us respond better to personal stories than to impersonal numbers and ideas.

Then she discusses one of the consequences:

that same bias means we are drawn to stories about people, not systems. When it comes to the financial crisis, we want heroes and villains and what-he-had-for-breakfast narratives; we are less enthralled by analytical accounts of the global financial system and the cycle of boom and bust.

Chrystia is nice enough to suggest that this is the age of the systems thinker, that those of us who can do it – and if there is one thing that this blog is about, it is systems thinking – are the new upper class. Sadly I think she is wrong. Systems thinking has the potential to be a very powerful tool, and it has had many successes. But cognitive bias means that it is always fighting an uphill battle against personality-driven narratives. Systems thinking has a marketing problem which it needs to solve before it can become the new black.

Update. This comment from Ashwin is so pertinent that I am going to hoick it up to the main text (and edit it to remove the references).

John Sterman in his book Business Dynamics says the following: “A fundamental principle of system dynamics states that the structure of the system gives rise to its behavior. However, people have a strong tendency to attribute the behavior of others to dispositional rather than situational factors, that is, to character and especially character flaws rather than the system in which these people are acting. The tendency to blame the person rather than the system is so strong psychologists call it the “fundamental attribution error”. In complex systems, different people placed in the same structure tend to behave in similar ways. When we attribute behavior to personality we lose sight of how the structure of the system shaped our choices. The attribution of behavior to individuals and special circumstances rather than system structure diverts our attention from the high leverage points where redesigning the system or government policy can have significant, sustained, beneficial effects on performance. When we attribute behavior to people rather than system structure the focus of management becomes scapegoating and blame rather than the design of organizations in which ordinary people can achieve extraordinary results.”
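
As a toy illustration of Sterman's point (my sketch, not his), here is a minimal stock-and-flow loop: an ordering rule facing a delivery delay oscillates whoever runs it, because the oscillation comes from the structure, not the person. The numbers and the 'aggressiveness' parameter are invented:

```python
def run_inventory_loop(aggressiveness: float, weeks: int = 20,
                       target: float = 100.0, delay: int = 2) -> list[float]:
    """Simple stock-management structure: order to close the gap to target,
    with deliveries arriving after a fixed delay. Demand is constant at 10."""
    inventory, pipeline = 50.0, [10.0] * delay
    history = []
    for _ in range(weeks):
        inventory += pipeline.pop(0) - 10.0        # receive deliveries, ship demand
        order = max(0.0, 10.0 + aggressiveness * (target - inventory))
        pipeline.append(order)
        history.append(inventory)
    return history

# Two different 'managers' (different correction aggressiveness), same structure:
print(run_inventory_loop(0.8))   # overshoots the target and oscillates
print(run_inventory_loop(0.5))   # still oscillates, just less violently
```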

Meta thinking and the market March 6, 2010 at 8:16 am

One habit that characterises the thoughtful trader or economic theorist is doubt. We see this time and again. Here for instance is Krugman, talking about the controversy over Malaysian capital controls in the late 90s:

In a more cosmic sense, though, the Malaysia story does illustrate just how totally wrong what passes for financial wisdom often turns out to be.

On the same day, we find the Big Picture asking this question:

more importantly, what might you be VERY wrong about?

Beunza and Stark, meanwhile, looking at fairly simple arb trading, say:

Through ethnographic observations in the derivatives trading room of a major investment bank, we found that traders use models… to look out for possible errors in their financial estimates.

Posting will likely be light next week as I am away, so let me leave you with some encouragement to be reflexive: what financial estimate might you have been horribly wrong about? It is a good question.

Knowns and Unknowns through the Crunch June 26, 2008 at 7:29 am

I have been reading Cassola et al.’s A research perspective on the propagation of the credit market turmoil from the most recent ECB research bulletin. The paper concentrates on information issues in the credit crunch. Here’s how I see matters there.

Before the crunch investors had some information on asset prices as many ABS were (or seemed to be) liquid. On the other hand there were two classes of information failure relating to asset risk: ignorance (where a risk taker simply had not tried to assess risk); and model failure (where the risk taker had tried, but had got the wrong answer because their model was mis-calibrated and/or failed to include all the pertinent risk factors).
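
A hypothetical sketch of that second failure mode, with made-up numbers: a risk model calibrated only on calm pre-crunch data badly understates losses once a fatter-tailed reality arrives:

```python
import random, statistics

random.seed(1)
calm_history = [random.gauss(0, 0.01) for _ in range(1000)]   # what the model was fit on
sigma = statistics.stdev(calm_history)

# 99% one-day loss estimate under the (mis-calibrated) normal model:
var_99_model = 2.33 * sigma

# 'Reality' with occasional jumps the calibration window never saw:
reality = [random.gauss(0, 0.01) + (random.random() < 0.02) * random.gauss(0, 0.08)
           for _ in range(1000)]
breaches = sum(r < -var_99_model for r in reality)
print(f"model 99% VaR: {var_99_model:.3f}, expected breaches ~10, actual: {breaches}")
```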

After the crunch, in contrast, investors had less information on asset prices but more information on the risk of ABS. Rapid asset price changes combined with illiquidity caused forced selling and model recalibration, and risk takers ended up knowing much more about the risks they were running.

Then however, understanding that ABS were illiquid, that prices were both falling and uncertain, and that it was unclear who had what, investors in financial institutions became concerned about their ability to assess bank credit quality. As Cassola et al put it:

the actual extent of exposures to the problematic instruments and the health of specific financial institutions becomes only gradually more known

Hence the interbank market freeze-up, amongst other things.

It seems, then, that a good understanding of the interplay between what’s known, what’s known to be unknown, what’s thought to be known, and what you don’t even know you need to know is important in explaining the Crunch. (Why did Rumsfeld get so much grief for this? I think it was the smartest thing he ever said.) In particular, new information can suddenly throw the spotlight on something you thought you knew but in fact didn’t. Caveat emptor.

Black box trading and information fusion October 30, 2007 at 7:19 am

According to Wikipedia, Information Fusion

refers to the field of study of techniques attempting to merge information from disparate sources despite differing conceptual, contextual and typographical representations

The convention is to keep the term data fusion for the situation where all information is quantitative, and use information fusion for the broader problem of integrating quantitative and qualitative data.

Another authority says that data fusion

takes isolated pieces of sensor output and turns them into a situation picture: a human-understandable representation of the objects in the world that created those sensor outputs.

Basically then, whenever you have diverse data which you have to try to turn into a coherent picture, you are performing data or information fusion.
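
A minimal sketch of the purely quantitative (data fusion) case: combine two noisy estimates of the same quantity by weighting each with the inverse of its variance, the standard trick behind a lot of sensor fusion. The radar and visual numbers below are invented:

```python
def fuse(estimates: list[tuple[float, float]]) -> tuple[float, float]:
    """Inverse-variance weighting of (value, variance) pairs.
    Returns the fused value and its (smaller) variance."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    return fused, 1.0 / sum(weights)

# Radar puts the plane at 10.2 km (variance 0.5), the visual track says
# 9.8 km but is noisier (variance 2.0):
print(fuse([(10.2, 0.5), (9.8, 2.0)]))   # closer to the radar, tighter variance
```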

Unsurprisingly much of the academic interest in this area occurs in limited problem domains: figuring out where the planes are from radar and visual data, for instance, or combining multiple different sonar sources to get a more complete picture of what’s swimming around you. Many quantitative trading models are of this class: they take feeds of market data and transactions and attempt to form a picture of where the market will go next. One simpler class of models, for instance, is basically trend followers. Often the idea of momentum is used: when markets are rising on increasing volume with low volatility, the models pile in, perhaps intensifying the rise. Decreasing volumes and/or rising volatility are sometimes used as triggers to reduce the size of the trade.
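
A sketch of that trend-following flavour, with made-up thresholds: go long only when the market is rising on increasing volume with low volatility, and stand aside (the built-in confidence estimate) when volume fades or volatility picks up:

```python
import statistics

def momentum_signal(prices: list[float], volumes: list[float],
                    vol_ceiling: float = 0.02) -> int:
    """Return +1 (long) or 0 (flat). Long requires rising prices,
    rising volume, and realised volatility below the ceiling."""
    returns = [p2 / p1 - 1 for p1, p2 in zip(prices, prices[1:])]
    rising = prices[-1] > prices[0]
    volume_up = volumes[-1] > statistics.mean(volumes)
    calm = statistics.stdev(returns) < vol_ceiling
    return 1 if (rising and volume_up and calm) else 0

prices  = [100, 100.4, 100.9, 101.2, 101.8]
volumes = [1.0, 1.1, 1.2, 1.3, 1.5]
print(momentum_signal(prices, volumes))                      # 1: trend on, pile in
print(momentum_signal(prices, [1.5, 1.3, 1.2, 1.1, 1.0]))    # 0: volume fading, stand aside
```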

Many quantitative models, then, implicitly have a confidence estimate built in. When they strongly believe in their own predictions, they put a trade on. When they either don’t believe in them, or they cannot make a prediction, the trade is taken off.

This feature is important: quantitative trading has been described as picking up pennies in front of a steam roller, and certainly many trading strategies act like short gamma positions, making a little money when they work, but losing a great deal when they are wrong. A false negative – a trade the model doesn’t signal, and so you don’t make, but that in fact would have been profitable – is a lot less bad than a false positive – a trade the model does signal but that turns out to lose. The magnitude of this issue can be seen from Morgan Stanley’s $480M one day quant trading loss.

For this reason, some quant traders use multiple models and only trade when all of them are giving the same signal. If the models are sufficiently different and do not share common assumptions, this helps to reduce model risk.
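
A sketch of that consensus rule (the toy models and market inputs are mine): run several structurally different models and only put the trade on when every one of them agrees on a non-zero signal:

```python
from typing import Callable, Sequence

def consensus_trade(models: Sequence[Callable[[dict], int]], market: dict) -> int:
    """Each model maps market data to a signal in {-1, 0, +1}.
    Trade only when all models give the same non-zero signal."""
    signals = {m(market) for m in models}
    return signals.pop() if len(signals) == 1 and 0 not in signals else 0

market = {"trend": 0.8, "carry": 0.3, "value": -0.1}
models = [lambda m: 1 if m["trend"] > 0 else -1,
          lambda m: 1 if m["carry"] > 0 else -1,
          lambda m: 1 if m["value"] > 0 else -1]
print(consensus_trade(models, market))   # 0: the value model disagrees, so no trade
```
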

It occurred to me recently that another approach might be to cast quantitative trading as an information fusion rather than a data fusion problem. That is, is there non-quantitative information that might be useful, in particular in avoiding false negatives by making the model more doubtful in situations where more care is needed? One of the anticedants here is the theory of prediction markets: when a large number of independent people have an opinion, a suitable weighting strategy can often lead to better predictions than any individual pundit. Note that I am not discussing analysts opinions here – there are clearly institutional biases at work there, and the history of collective analyst predictions is not that promising. Rather I am suggesting trying to use the commentariat, ideally as large a body of it as possible, as a signal akin to rising volatility. When enough blogs start to discuss a possible crash, that is a sell signal akin to rising volatility or rising market risk premiums. Such an information fusion based quantitative trading model would be of more use in global macro than in very short term applications like index arb, but the idea of using rising worry as a deleveraging signal could be interesting. Or it could just be a heap of potatoes.