Category / Decision Making

Looking gift horses in the mouth… May 16, 2012 at 8:11 am

… is a good idea. Sallie Krawcheck writes on the HBR blog:

Because the barriers to entry are low, there’s usually no good reason why returns in an institutional banking business should stay very high for an extended period. Competition should drive those returns down. As a result, sustained high returns on equity — especially higher returns than competitors are earning — can be a sign of impending trouble. They might mean a business is taking outsized risks, or misunderstanding the risks it is taking, or is skirting too close to the regulations. Not all high-return businesses crash, but variations on the comment “In hindsight, the returns were probably too good and too steady” are all too common in the financial sector.

Update. A related issue is that high and stable returns create a huge pressure to grow the business. Then there is a risk that you become too big relative to the size of the market. As Aleph says, citing the examples of AIG, Equitable and LTCM:

Anytime you get a large fraction of the market’s volume, you should stop, and re-evaluate. You’re probably doing something wrong.

The Act part January 3, 2012 at 5:28 pm

Ever since a helpful comment by Hans on my post about what financial risk managers can learn from flight safety, I have been meaning to write about OODA loops. OODA stands for observe, orient, decide, and act; the concept is quite a helpful way of thinking about managing risk.

According to its inventor, USAF Colonel John Boyd, the OODA loop is a useful concept because once you figure out that this is what you are doing, you can tighten the loop and so make decisions faster than the next guy. You need to work on all four parts of the loop, and it is this segmentation which I think is most useful (there is a toy code sketch after the list):

  • ‘Observe’ is obvious. Decide on your risk parameters and get the data*.
  • ‘Orient’ is more interesting. The OODA model explicitly includes a cultural element. Wikipedia quotes a Boyd paper which says that ‘the repository of our genetic heritage, cultural tradition, and previous experiences’ are ‘the most important part of the O-O-D-A loop since it shapes the way we observe, the way we decide, the way we act’. In other words, since culture is an important implicit element in decision making, you had better build it into your model of how you make decisions.
  • ‘Decide’ is also obvious. You have the data, you have analysed it based on your model of the world – figure out what to do next.
  • ‘Act’ – then do it.
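
For the programmers, here is that toy sketch of the loop as code. The limit, the data and the actions are all invented; this is my own illustration of the segmentation, not anyone’s actual risk system.

```python
import random

# Toy sketch of an OODA-style risk loop. The limit, the data and the
# actions are all invented for illustration.

LIMIT = 100.0  # risk parameter chosen in advance

def observe():
    """Observe: get the data (here, a made-up exposure number)."""
    return random.uniform(50, 150)

def orient(exposure, risk_appetite="conservative"):
    """Orient: interpret the data through the firm's culture and models."""
    cushion = 0.9 if risk_appetite == "conservative" else 1.0
    return exposure / (LIMIT * cushion)   # limit utilisation

def decide(utilisation):
    """Decide: pick an action given the oriented view."""
    return "cut positions" if utilisation > 1.0 else "hold"

def act(action):
    """Act: actually do it -- the step so often never reached."""
    print(action)

random.seed(0)
for _ in range(5):   # tighten the loop: iterate quickly
    act(decide(orient(observe())))
```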

The usual picture is this one:

[Diagram: the OODA loop]

What we get wrong in financial risk management so often is never getting to the DA. We measure a lot of stuff. We have committees and reports and limits and such like. But the really big risks are often not acted upon because we are oriented so that we cannot decide. Look what happened to those people who tried to act against the firm’s orientation at MF Global, or Enron or HBOS.

Once a year – and why not make it the committee’s first meeting? – the group risk committee should ask themselves what features of the firm’s culture stop them from making effective risk decisions, and how they can be changed.

*Interestingly, some critiques of the OODA loop paradigm in military decision making are based on the inevitable tendency for what you observe to be narrowed to just those things that seem to help with decision making:

The result of this type of thinking is to spend a lot of time narrowing the focus of what we choose to observe in order to better orient and decide. This drives one to try and reduce the noise associated with understanding the problem.

To me, this is a risk, but not one confined to this approach. You always need to ask the question ‘what am I missing?’ in any risk measurement situation.

Prices as modified Schelling Points September 3, 2011 at 7:14 pm

This idea comes from Doug’s comment to my last post. First, what is a Schelling Point?

From Wikipedia (mildly edited):

Tomorrow you have to meet a stranger in New York City. Absent any means of communication between you, where and when do you meet them? This type of problem is known as a coordination game; any place in the city at any time tomorrow is an equilibrium solution.

Schelling asked a group of students this question, and found the most common answer was “noon at (the information booth at) Grand Central Station.” There is nothing that makes “Grand Central Station” a location with a higher payoff (you could just as easily meet someone at a bar, or in the public library reading room), but its tradition as a meeting place raises its salience, and therefore makes it a natural “focal point.”

The crucial thing then is that Schelling points are arbitrary but (somewhat) effective equilibrium points. (For an interesting TED talk on Schelling points, try here.)

Schelling won the Nobel prize in Economics in part for the Points which bear his name. But what do they have to do with prices?

Well, in a sense a price is a Schelling point. Two people need to agree on it in order to trade. There is no particular reason that BofA stock at $7.25 is a better price than $5 or $10; sure, stock analysts may well disagree, but I am willing to bet that few of them could get to $7.25 for BofA based on publicly available information excluding prices.

As Doug says, this is even more the case for an illiquid financial asset. Here there are few prior prices to inform the decision as to what solution to propose to the Schelling coordination problem. A proper appreciation of the arbitrary nature of the problem is required here.

Note, by the way, that I called a price a modified Schelling point. This is because, unlike a typical Schelling problem, you often know the solution that others have picked because you can often see the prior prices at which assets have traded. After all, the Schelling problem for strangers meeting in New York is a lot easier if you know the most common answer is ‘Grand Central Station at noon’.

I like the way that the metaphor of price-as-Schelling points emphasises the arbitrary nature of prices. Another good thing about it is that it highlights that buyer and seller together construct the equilibrium. If we say the answer is PDT at 9pm, then it is. Besides, PDT serves Benton’s old fashioneds, which are apparently things of genius, so this solution has particular merit. Note that I can make this solution more plausible by publishing maps of PDT, linking to positive reviews, etc. – think of this as the equivalent of equity research. I make my solution better known so that there is more chance that you will pick it too.

Great leaders realise that they have little control July 30, 2011 at 12:07 pm

In a fascinating article in the WSJ, Nassir Ghaemi makes two interesting links. First he points out that many of our best leaders in difficult times were (in the clinical sense) depressed:

When not irritably manic in his temperament, Churchill experienced recurrent severe depressive episodes, during many of which he was suicidal. Even into his later years, he would complain about his “black dog” and avoided ledges and railway platforms, for fear of an impulsive jump. “All it takes is an instant,” he said.

Abraham Lincoln famously had many depressive episodes, once even needing a suicide watch, and was treated for melancholy by physicians. Mental illness has touched even saintly icons like Mahatma Gandhi and Martin Luther King Jr., both of whom made suicide attempts in adolescence and had at least three severe depressive episodes in adulthood.

Ghaemi gives too many examples for us to conclude that there is no association here. So then he asks the obvious question, namely how depression might make for better leaders. His answer is rather interesting:

“Normal” nondepressed persons have what psychologists call “positive illusion”—that is, they possess a mildly high self-regard, a slightly inflated sense of how much they control the world around them.

Mildly depressed people, by contrast, tend to see the world more clearly, more as it is. In one classic study, subjects pressed a button and observed whether it turned on a green light, which was actually controlled by the researchers. Those who had no depressive symptoms consistently overestimated their control over the light; those who had some depressive symptoms realized they had little control.

For me this makes complete sense. A leader has to know what he or she can control, and what they cannot. They have to replan when things go wrong. They need strength to keep going, and belief in their cause, but also a very clear-eyed understanding of what is currently possible. So given that pragmatism seems strongly associated with individuals prone to depression, perhaps we should value these people more. Indeed, arbitrarily drawing any line between ‘normal’ and ‘mentally ill’ (as Foucault amongst many others discussed) risks discarding a huge amount of talent on the called-mad side of the line. In the good times that waste is shameful; in the bad, it is the difference between a Churchillian victory and disaster.

The manager as deity January 26, 2011 at 6:06 am

Aditya Chakrabortty writing in the Guardian rightly points out the dangerous tendency for CEOs to act more like cult leaders than managers. He gives Steve Jobs as the archetypal example of this tendency:

Jobs is part of a widespread trend among chief executives to put themselves forward not as managers, but as leaders. Follow the coverage of the Davos summit this week and count the number of times a corporate finance officer for some accounting-software company or other is described as a business leader…

If this were just self-delusion, then the consequences would not be too bad. But of course it has been used to justify the enormous increase in CEO compensation compared with the average worker:

The result has been a tremendous boost in chief-executive power, according to Dennis Tourish, an academic at Kent University. He points out that just before Enron imploded it was run like a cult, with Jeff Skilling exercising huge control and influence over who was recruited, how they worked – and who got laid off. And while Enron was an exception, Tourish points out that Jack Welch as boss of General Electric had similar power. “Business leaders typically only have yes-men – no one to stand up to them.”

These days you get to the top by taking bold decisions – which bold decisions doesn’t much matter, as it is pretty much luck which will work – and then claiming the credit for whatever goes well while ruthlessly suppressing talk of failures. Do that enough, and with luck you will get to the point where it becomes (for a while at least) self-sustaining. You’re now a leader so you should be paid millions, right? Not so much…

Tranche discount factors October 7, 2009 at 9:40 am

In the bad mad days of 2005 and 2006, some people valued ABS by estimating the future cashflows and then discounting them back at Libor flat. In situations with significant prepayment risk, they might well have option adjusted this value to account for interest rate convexity based on some prepayment model.

That process does not produce values that are market consistent these days. The reason is that there is compensation in the spread of an ABS for features other than default risk and interest rate optionality. These other factors include current liquidity risk, potential future liquidity risk, funding cost (remember you cannot necessarily repo an ABS), and the volatility of both the mark to market of the asset and its capital requirements. (The last two can just be thought of as a convexity adjustment for the volatility of EL and UL.)

The market handles this ‘problem’ rather crudely. The convention is to discount tranche cashflows at a rate higher than Libor. Thus for instance one might recover the market price of a AAA tranche by discounting at Libor plus sixty, while a BB tranche might have to be discounted at Libor plus six hundred to get the right price.

These higher rates solve the problem at the expense of introducing an arbitrary step much like the use of implied volatility to recover market prices for options using the Black-Scholes formula. Just as options dealers think in terms of implied vol without for a moment believing that the underlying follows a diffusion, so ABS traders think in terms of these discount factors without believing that they are anything more than an ad hoc market adjustment. Clearly we still have a long way to go in being able to price ABS based on fundamental factors.
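
To make the convention concrete, here is a minimal sketch – my own illustration with invented numbers, not anyone’s pricing library – of discounting tranche cashflows at Libor plus a spread and bisecting for the spread that recovers a given market price:

```python
# Minimal sketch of the convention described above: discount tranche
# cashflows at Libor plus a spread, and back out the spread that
# recovers a given market price. All numbers are invented.

def pv_at_spread(cashflows, libor, spread_bp):
    """PV of annual cashflows discounted at Libor plus a spread (in bp)."""
    r = libor + spread_bp / 10000.0
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows, start=1))

def implied_spread(cashflows, libor, market_price, lo=0.0, hi=10000.0):
    """Bisect for the spread recovering the market price -- the ad hoc
    step, analogous to backing out an implied vol with Black-Scholes."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if pv_at_spread(cashflows, libor, mid) > market_price:
            lo = mid   # PV still too high, so the spread must be bigger
        else:
            hi = mid
    return (lo + hi) / 2.0

# A hypothetical five-year tranche paying a 6% coupon, priced at 85:
cfs = [6.0, 6.0, 6.0, 6.0, 106.0]
print(round(implied_spread(cfs, libor=0.03, market_price=85.0)))  # bp
```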

Sound words on defence spending May 16, 2009 at 6:36 am

Lewis Page writes sharply and well on defence procurement. Here’s a sample:

defence manufacture brings us a measly billion or two in exports each year – and our arms industry requires the great bulk of the £15Bn defence materiel budget in spending to win us this rather paltry amount of trade.

Put that way, the claim that we need our massive – per capita larger than any other EU state – defence budget to support exports is clearly ridiculous. Does this spending provide credible independent capability? Page says no, and gives detailed examples:

the Prime Minister can fire his American-made Trident missiles without asking Washington first. But he cannot expect his supposedly ‘British’ or ‘European’ systems to keep operating through a normal-length war if US support is cut off. No, seriously. The Eurofighter contains so much US equipment that American consent is required for us to export it to Saudi Arabia, for goodness’ sake. EADS tells us openly that “the A400M will benefit from use of American content”. The command system for the Nimrod is being made by Boeing. The Future Lynx uses American engines.

On a day when the shameful truth of defence procurement – that British soldiers are dying because we can’t get them the kit they need – is once more emphasised, it is time to face the truth. Defence spending isn’t just absurdly high in the UK: despite that, it does not give the ordinary soldier, sailor and flyer what they need and deserve.

If you want to subsidise exports, then support the most efficient exporters. They are not defence companies. If you want capable weapons systems, then buy the best ones that you can afford*. And if you want to send men out to fight, then you have a moral obligation to kit them out properly: that imperative overrides any possible national interest in a particular manufacturer or procurement process. We don’t just need to cut defence spending: we need to spend smarter and more ethically.

*Subject of course to acceptable ‘will they support it’ risk. One might like to consider for a moment in this context whether the Sukhoi-30 is a better bet for the RAF than the Eurofighter…

Update. A correspondent with considerable knowledge of the defence industry points out that even when UK defence companies can meet a procurement objective, they are more expensive than their civil equivalents because they cannot meet the same timescales and margins. A combination of the shelter provided by captive defence spending and the overhead (both in cost and in putting off some staff) of security means that they simply are not lean and mean enough. If you want a van, go to Ford or Toyota or Renault – don’t go to someone who makes tanks.

Mervyn being sensible March 6, 2009 at 11:53 am

The full transcripts of Mervyn King’s evidence to the Treasury select committee are not up just yet, but this morsel struck me as very much on target:

It is moral hazard that has led us to where we are. I don’t want to blame anyone. All the players have acted rationally given the positions they were in.

What works September 7, 2008 at 7:09 am

Sometimes, just sometimes, you read something so good that it makes every other piece of journalism you’ve read recently seem thin, dull, and without insight. Ross McKibbin’s article in the current LRB is that good.

McKibbin cuts through the rhetoric admirably. He points out the hollowness of Blair’s promise to go with ‘what works’, and how in practice it became the very antithesis of it:

The culture of the focus group does not, however, lead to an apolitical politics. On the contrary, it reinforces the political status quo and encourages a hard-nosed, ‘realistic’ view of the electorate that denies the voter any political loyalty, except to ‘what works’. ‘What works’, though, is anything but an objective criterion: these days it is what the right-wing press says ‘works’. The war on drugs doesn’t work; nor does building more prisons; nor, one suspects, will many of the anti-terror laws. But that doesn’t stop ministers from pursuing all of them vigorously. New Labour in practice is much more wedded to what-works politics than the Conservatives were under Thatcher, who was openly and self-consciously ideological.

Much of the present malaise in British politics flows from this. Among other things, what-works gives the wrong answers.

He also points out, amusingly, that we do in fact have three parties in parliament. They are just not the three parties whose names appear on the ballot paper. A more accurate arrangement based on ideology rather than history would have:

A party of the moderate left, undoubtedly led by Vince Cable, which would include some Labour backbenchers (but no member of the present government), some Lib Dems (but probably not their leader), and perhaps Tories like Kenneth Clarke and Ed Vaizey. There would be a centreish party which would include Brown, some members of the cabinet, most Lib Dems, a large part of the Parliamentary Labour Party, probably William Hague, Theresa May, Alan Duncan and a few other Tories; Cameron and Osborne might be honorary or temporary members. The party of the right would include everyone else (including many members of the government).

There is much else of value in the full article and I would encourage you to read it. But even if you don’t, at least rejoice that there is still journalism of this quality going on in this country.

Fallacies of Planning September 1, 2008 at 8:03 am

I read a post on Overcoming Bias that referenced the planning fallacy. I thought this was going to turn out to be more interesting than it did, so instead of the original content, let me propose a different planning fallacy.

The fallacy is simply that people tend to believe that executing the plan will achieve the desired objective and only that. We are used to plans working in simple cases: I’m hungry, so I make a plan to go to the fridge. I execute the plan and lo, food is mine.

We are even used to plans not working: I might slip on the way to the fridge and end up on the floor swearing rather than happily nibbling some dairy delicacy. But typically if a simple plan works, then the consequences are simple and easy to guess. The amount of cheese in the fridge decreases. Big deal.

Plans for complex objectives, however, usually involve unexpected (and therefore by definition unintended) consequences. Lowering rates doesn’t lower Libor because banks hoard cash. Saying you will protect the Agencies does not reassure the markets because they don’t believe you or they don’t know what ‘protect’ means, exactly — or for some other reason entirely. And so on. I suggest that the study of planning failures (and successes) should be compulsory for politicians and economists.

Beware certainty August 20, 2008 at 8:02 am

Wise words from Doug Kass, via Big Picture:

I continue to listen to and read a lot of convicted opinions: for instance, the market has bottomed, financials have bottomed, oil has topped, stocks are enormously undervalued against historic measures…

I would put those convicted opinions in a locked closet.

Quite right. The more convinced someone is that they are right about an unknowable future, the less weight I place on that forecast. Blessed be the doubters, for they shall inherit positive alpha.

Why I like $140 oil July 8, 2008 at 12:10 pm

An article by George Monbiot that is surprisingly neither ill-informed nor annoying (isn’t it nice when someone who is usually foolish says something sensible?) considers the good things about $140 oil. One of them is that it is stopping a lot of unsustainable fishing:

No east Asian government was prepared to conserve the stocks of tuna; now one-third of the tuna boats in Japan, China, Taiwan and South Korea will stay in dock for the next few months because they can’t afford to sail. The unsustainable quotas set on the US Pacific seaboard won’t be met this year, because the price of oil is rising faster than the price of fish. The indefinite strike called by Spanish fishermen is the best news European fisheries have had for years. Beam trawlermen – who trash the seafloor and scoop up a massive bycatch of unwanted species – warn that their industry could collapse within a year. Hurray to that too.

Let me add to that. Hurray if the oil price ruins the road transport industry. We should be sending much more cargo by rail and river anyway. Hurray if it causes people to drive less and to buy smaller and less polluting cars. Not only should Gordon go ahead with higher vehicle duty on the most polluting cars, he should extend that idea to lorries, planes, and indeed every other source of pollution. The only way to realign the economy to the post carbon age is to get the incentives right. $140 oil helps, but $200 or $250 oil would be even better.

Update. The high oil price appears to be working in Washington. According to a Washington Metro press release:

Twenty of Metrorail’s top 25 highest weekday ridership days have occurred since April of this year.

Quantitative finance ideas in decision making May 29, 2008 at 8:38 am

Suppose you have a decision to make and you know some quantitative finance. How can you use what you know to help in your decision?

First you construct an outcome metric. This is just a function that expresses whether one distribution of outcomes is better or worse than another. Next you deduce the distribution of outcomes for various choices, often by building a model of how the initial conditions determine the outcome, then determining the distribution of initial conditions and putting that through the model*. Apply the metric and make the decision.
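
As a toy illustration of the procedure – the model, the metric and the numbers are all invented for the purpose – consider choosing between two hypothetical projects:

```python
import random

# Toy illustration: push a distribution of initial conditions (demand)
# through a model of each choice, then rank the resulting outcome
# distributions with a metric. Everything here is invented.

def metric(outcomes, risk_aversion=0.5):
    """Outcome metric: mean payoff penalised for dispersion (bigger is better)."""
    mean = sum(outcomes) / len(outcomes)
    sd = (sum((x - mean) ** 2 for x in outcomes) / len(outcomes)) ** 0.5
    return mean - risk_aversion * sd

def model(choice, demand):
    """Model mapping initial conditions to an outcome for each choice."""
    if choice == "safe":
        return 2.0 + 0.1 * demand
    return 0.5 * demand - 1.0   # "risky": better in good states, worse in bad

random.seed(1)
demands = [random.gauss(10, 4) for _ in range(10000)]  # initial conditions

for choice in ("safe", "risky"):
    outcomes = [model(choice, d) for d in demands]
    print(choice, round(metric(outcomes), 2))
```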

Now here’s where it gets interesting. Being a quantitative person, you know that there is model risk. Specifically, there are three kinds of model risk here:

  • Your outcome metric might be wrong. One common way this can happen is that there is something that you have neglected entirely – unexpected consequences.
  • The distribution of outcomes is wrong because the model is wrong.
  • The distribution of outcomes is wrong because the future does not behave like the past.

So, knowing this, we need to continue monitoring the decision, updating both our outcome metric and our model, to ensure that with new information our decision remains correct.

Two things stimulated me to write this account: one was the shameful mess that is UK energy policy mentioned earlier in the week; the other was hearing an item about a new book, Mistakes Were Made (But Not by Me), on the Today programme. The book discusses how, when faced with evidence that our decisions are bad, rather than recant and change our minds, we engage in self justification. I will let the authors take over at this point:

The engine that drives self-justification … [is] cognitive dissonance. Cognitive dissonance is a state of tension that occurs whenever a person holds two cognitions (ideas, attitudes, beliefs, opinions) that are psychologically inconsistent, such as “Smoking is a dumb thing to do because it could kill me” and “I smoke two packs a day.” Dissonance produces mental discomfort, ranging from minor pangs to deep anguish; people don’t rest easy until they find a way to reduce it. In this example, the most direct way for a smoker to reduce dissonance is by quitting. But if she has tried to quit and failed, now she must reduce dissonance by convincing herself that smoking isn’t really so harmful, or that smoking is worth the risk because it helps her relax or prevents her from gaining weight (and after all, obesity is a health risk, too), and so on. Most smokers manage to reduce dissonance in many such ingenious, if self-deluding, ways.

One of the most obvious examples is of course Blair and the Iraq war, but there are many, many more.

The quantitative way of thinking – acknowledging up front that we do not have all the relevant information (and might never have) so that we need to review the decision regularly – is a good way of depersonalising matters. We do not need to engage in self-justification because it is not our decision: it is the result of some modelling. If it goes wrong, we fix the model. Of course there may very well be opinions that go into the model, but by explicitly including the monitoring step we acknowledge before we make the decision that it might be wrong. That must be helpful.

* This is a stylised version of what any kind of economic capital model, such as VaR, does. The outcome metric in finance is sometimes obvious – it’s expected return, with more being better – and one of the many things that makes non-financial decisions harder is that such an obvious metric is not easy to find. In particular, away from the risk neutral measure, how much compensation should you require for uncertainty in outcomes? (Oh, and if there are any mathematicians reading, it does not need to be a metric in the technical sense: a well-founded total order with all GLBs and LUBs will do.)

Trichet on asset price bubbles May 23, 2008 at 8:48 am

Jean-Claude Trichet made a speech in 2005 on Asset Price Bubbles and Monetary Policy. The full text is here. A few points leap out at me. Firstly Trichet raises the question as to whether there is such a thing as an asset price bubble:

I believe the NASDAQ valuation of the late 1990s was not excessive… [I] tend to believe that occasionally we observe behavioural patterns in financial markets, which can even be perfectly compatible with rationality from an individual investor’s perspective, but nevertheless lead to possibly large and increasing deviations of asset prices from their fundamental values, until the fragile edifice crumbles.

‘Excessive’ is a difficult word and I can see why Trichet is cautious about using it. But certainly the fair value of debt securities is the result of many phenomena, including funding premiums and liquidity premiums as well as long-term default rates. Their spreads can tighten, leading to asset price growth, if funding is cheap and liquidity is plentiful, without this necessarily being irrational.

The problem of knowing how much is too much means that Trichet is cautious about the possibility of identifying an asset price bubble:

I would argue that, yes, bubbles do exist, but that it is very hard to identify them with certainty and almost impossible to reach a consensus about whether a particular asset price boom period should be considered a bubble or not.

He suggests one definition of a bubble:

[There is] a warning signal when both the credit-to-income ratio and real aggregate asset prices simultaneously deviate from their trends by 4 percentage points and 40% respectively.

I agree, but I would have thought that liquidity and/or funding premiums and the availability of credit would also provide helpful warning signals. As Trichet says:

A bubble is more likely to develop when investors can leverage their positions by investing borrowed funds.
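
A minimal sketch of the warning-signal rule quoted above: the 4 percentage point and 40% thresholds are from the speech, but the use of a simple trailing average as ‘trend’ and all the data are my assumptions.

```python
# Toy sketch of the warning signal quoted above: flag when the
# credit-to-income ratio is more than 4 percentage points above trend
# AND real aggregate asset prices are more than 40% above trend.
# "Trend" here is a crude trailing average -- my assumption, not the
# speech's -- and the data are invented.

def trend(series, window=10):
    return sum(series[-window:]) / window

def bubble_warning(credit_to_income, asset_prices):
    c_dev = credit_to_income[-1] - trend(credit_to_income)   # in pp
    p_dev = asset_prices[-1] / trend(asset_prices) - 1.0     # fractional
    return c_dev > 4.0 and p_dev > 0.40

# Hypothetical data: ratio in percent, prices as an index.
credit = [100, 101, 102, 102, 103, 104, 105, 107, 110, 112, 115]
prices = [100, 102, 105, 108, 112, 118, 125, 135, 150, 170, 200]
print(bubble_warning(credit, prices))   # True for these invented series
```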

Interestingly (for 2005), Trichet points out the positive feedback from collateral when a bubble is pricked:

A negative shock is likely to have a larger effect than a positive one. The reasons are that credit constraints can depend on the value of collateral and that in case of a financial crisis the whole financial intermediation process can in the worst case completely fail.

After those insights the conclusions are depressing:

With regard to the optimal monetary policy response to asset price bubbles, I would argue that its informational requirements and its possible – and difficult to assess – side-effects are in reality very onerous. Empirical evidence confirms the link between money and credit developments and asset price booms. Thus, a comprehensive monetary analysis will detect those risks to medium and long-run price stability…

I fully advocate the transparency of a central bank’s assessment of risks to financial stability and of its strategic thinking on asset price bubbles and monetary policy. The fact that our monetary analysis uses a comprehensive assessment of the liquidity situation that may, under certain circumstances, provide early information on developing financial instability is an important element in this endeavour.

In other words we will try to tell you when a bubble is inflating but, beyond targeting inflation, there is little we are going to do about it. And M. Trichet did indeed keep to the second part of that promise.

Grand Theft Banking April 29, 2008 at 7:22 am

The next instalment of the popular computer game Grand Theft Auto is out today (or, for you really hardcore gamers, midnight yesterday). Its launch prompts me to consider how gaming could help finance, beyond the extra carry from all of those copies of the game bought on credit cards. So how about this: design a game that’s the financial system. It has deposit takers, hedge funds, investment banks, pension funds, the lot. It has a diversity of different asset classes too with real time prices. It also has shareholders, depositors, deposit protection, regulation, the interbank market, whatever you want. Your mission, player, is to set the regulations to prevent bubbles, protect depositors, allow moderate growth, and prevent moral hazard. Covertly of course the BIS will be monitoring your progress and any really good ideas get put into Basel 3.

Decision making and compensation March 10, 2008 at 12:16 pm

Alea references a fascinating paper by Richard Herring and Susan Wachter, Bubbles in Real Estate Markets. This makes an important point about the stability of financial return distributions:

The ability to estimate the probability of a shock – like a collapse in real estate prices – depends on two key factors. First is the frequency with which the shock occurs relative to the frequency of changes in the underlying causal structure. If the structure changes every time a shock occurs, then events do not generate useful evidence regarding probabilities.

And of course this is the case in most (all?) financial situations. The next crisis is never like the last one, in part because trading behaviour is changed by memories of the last one, in part because of product innovation and the changing structure of financial intermediation.

The important question then is

How do banks make decisions with regard to low-frequency shocks with uncertain probabilities? Specialists in cognitive psychology have found that decision-makers, even trained statisticians, tend to formulate subjective probabilities on the basis of the “availability heuristic,” the ease with which the decision-maker can imagine that the event will occur. […]

This tendency to underestimate shock probabilities is exacerbated by the threshold heuristic (Simon(1978)). This is the rule of thumb by which busy decision-makers allocate their scarcest resource, managerial attention. When the subjective probability falls below some threshold amount, it is disregarded and treated as if it were zero.

Compensation policies make this worse. Suppose you buy insurance against a rare event that everyone else ignores. Your return suffers relative to theirs, and hence in most years (when the rare event does not happen) you are paid less. If the event does happen, the bank’s total income will almost certainly fall anyway, compensation will be tight, and you will likely not be rewarded properly for being the far-sighted manager you are. Meanwhile, in the bad year, those who did not buy the insurance acted like nearly all their peers, and hence will probably not be punished too badly.
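
A toy expected-value calculation makes the asymmetry plain (all the numbers are invented):

```python
# Toy expected-value sketch of the compensation asymmetry described
# above. All the numbers are invented for illustration.

p_crisis = 0.05            # annual probability of the rare event
premium = 1.0              # annual carry given up to buy the insurance

pay_insured_normal = 10.0 - premium  # underperforms peers in normal years
pay_insured_crisis = 11.0            # hedge pays off, but the pool is tight
pay_herd_normal = 10.0
pay_herd_crisis = 8.0                # punished, but so is nearly everyone

ev_insured = (1 - p_crisis) * pay_insured_normal + p_crisis * pay_insured_crisis
ev_herd = (1 - p_crisis) * pay_herd_normal + p_crisis * pay_herd_crisis

print(f"insured: {ev_insured:.2f}, herd: {ev_herd:.2f}")
# insured: 9.10, herd: 9.90 -- the incentive is to ignore the rare event
```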

Central bank policy: who wins? February 13, 2008 at 9:24 am

A few days ago, I mentioned in passing the idea that there is an experiment going on at the moment with central bank policy: it seems unlikely that the Fed, the ECB and the Bank of England are all right in their reactions to the crunch. Larry Elliott focussed on the rates part of the puzzle, weighing up the pros and cons of the different levels of rates in USD, EUR and GBP versus the inflationary environment.

The other part of this puzzle is collateral, or more precisely what collateral is eligible at the window. Here too policy differs: the ECB for instance is remarkably generous in the range of collateral it permits banks to post at the window. Indeed posting RMBS as collateral with the ECB is basically keeping the Spanish banking system afloat (i.e. liquid) at the moment. Similarly the collateral eligible at the Fed’s new TAF is rather widely defined, and the Bank of England has, perhaps belatedly, extended the range of admissible collateral recently.

As Willem Buiter points out, central banks have a duty to provide liquidity to the market in times of stress. He says furthermore (although the emphasis is mine):

There is no moral hazard as long as central banks provide the liquidity against properly priced collateral, which is in addition subject to the usual ‘liquidity haircuts’ on this fair valuation.

Right now I think it is reasonable to doubt whether this collateral is properly priced. Certainly the idea that a bank can take a bunch of mortgages, package them up as RMBS, and repo them with the central bank without market scrutiny of any kind is bizarre. If this paper has never traded, why does the bank feel comfortable that it is worth par?

The wide range of eligible collateral and lack of scrutiny in valuation of that collateral has succeeded in liquefying the banking system. It was probably the right short term policy reaction. But as the situation improves central banks will need to wean the market off the crack cocaine of cheap liquidity with no questions asked about collateral value. How (if?) they do this may prove as significant as the path of rates for the final outcome.

What is rational? June 18, 2007 at 9:51 pm

Someone might not have read their Wittgenstein, let alone their Bakhtin. At Overcoming Bias, we find:

[People give] views on risks of nanotechnology even when […they] know that they do not know much about the subject and these views become strengthened along ideological lines by more facts. Facts do not matter as much as values: people appear to make a quick gut feeling decision (probably by looking at the word “technology”), which is then shaped by their ideological outlook.

[…]

This does not bode well for public deliberations on new technologies (or political decisions on them), since it seems to suggest that the only thing that will be achieved in the deliberations is a fuller understanding of how to express already decided cultural/ideological identities in regards to the technology. It does suggest that storytelling around technologies, in particular stories about how they will fit various social projects, will have much more impact than commonly believed. Not very good for a rational discussion or decision-making, unless we can find ways of removing the cultural/ideological assumptions of participants, which is probably pretty hard work in deliberations and impossible in public decision making.

Does the author believe that the ‘rational’ decision is something other than the average community decision? What does ‘rational’ mean, when we are talking about language, if it isn’t ‘what most people agree follows’? Is there a manual on correct deductions in English? Is the only person without ‘cultural/ideological assumptions’ the author? Or is the assumption that a certain mode of discourse is the only rational one itself just possibly a cultural/ideological assumption?

Errors in Cost Benefit Analysis June 12, 2007 at 8:15 pm

A recent Bloomberg article referring to the AEI-Brookings Institute paper Has Economic Analysis Improved Regulatory Decisions? made me think again about cost benefit analysis.

The paper condemns both the quality of cost benefit analysis used in determining the impact of regulation and the ‘tenuous’ use made by policy makers of that analysis. Undoubtedly that is partly for political or hegemonic reasons – cost benefit analysis sometimes comes to the ‘wrong’ conclusions – but I suspect it is also partly because the conclusions of a cost benefit analysis are sometimes not believed. Perhaps the analyst is at fault here for not stating the margin of error.

Error bars are common in experimental science: the fine structure constant, for instance, is known to roughly one part in a billion, and in any precise discussion we would state not 1/alpha = 137.035999710 but rather 1/alpha = 137.035999710(96), meaning that the reciprocal of alpha could be as high as 137.035999806 or as low as 137.035999614.

In cost benefit analysis this could be a very useful tool, especially as the error bars are much larger. To take the first example that Google coughed up, an amusing cost benefit analysis of different law schools (where the cost is the fees and the benefit is the increase in expected salary after going to the school), the problem is that while the costs are fixed, the benefits aren’t. Not only do different individuals earn different amounts despite having the same education, speciality counts, so that (in the perverse world in which we live) a tax lawyer earns more than a criminal defender. Moreover the reputation of various law schools will change over time, affecting not just the earnings of current graduates but also those of past ones.

An even better example is one of the next hits, a discussion of the cost benefit analysis of rebuilding New Orleans after Katrina given its obvious hurricane risk. Here not only is the benefit uncertain, but so too is the cost. Any analysis with error bars would suggest at best ‘case not proven’: that, rather than ‘cost > benefit’ or ‘cost < benefit’, is often the best that we can conclude, since it will often be the case that the intervals [cost − possible error in cost, cost + possible error in cost] and [benefit − possible error in benefit, benefit + possible error in benefit] intersect.
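
A minimal sketch of this interval logic, with invented numbers:

```python
# Minimal sketch of comparing cost and benefit as intervals rather than
# point estimates. All numbers are invented for illustration.

def compare(cost, cost_err, benefit, benefit_err):
    c_lo, c_hi = cost - cost_err, cost + cost_err
    b_lo, b_hi = benefit - benefit_err, benefit + benefit_err
    if c_hi < b_lo:
        return "benefit > cost"
    if b_hi < c_lo:
        return "cost > benefit"
    return "case not proven (intervals intersect)"

# Point estimates say the benefit (120) exceeds the cost (100), but the
# error bars are wide enough that we cannot actually conclude that:
print(compare(cost=100, cost_err=30, benefit=120, benefit_err=25))
```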

Playing at the weekend December 12, 2006 at 7:26 pm

This is a fantastic area of countryside to the west of Exeter. I went walking with some friends on Saturday, and it was lovely: refreshing, beautiful and calming. It made me think about the importance of conserving countryside like this, of managing the transport and planning systems so that people can live close to where they work and travel between them efficiently. We have a dysfunctional public transport system, a planning system that is about to be torn apart in the pursuit of unnecessary growth and a government with the inability to take responsibility for anything. If this toxic mix results in us losing views like this, we will have lost something truly valuable.

Part of the problem is that we do not think of the outcomes of policy decisions holistically. Why not demand that rules and laws are prefaced by a statement as to their intent and metrics to measure success? “This bill is intended to … Measures of success include…” Then we can understand what the authorities are trying to do (as opposed to what they say they are trying to do), judge whether their metrics measure the desired outcomes or not, and so objectively hold them to account.

Think of those proposed revisions to planning: are they about economic growth or the protection of the environment? If, as I suspect Gordon Brown would claim, the answer is ‘both’, how do we balance those two goals? Once we know what it means to succeed at the game, we can analyse the strategies that people will take in playing it.