Category: Complexity

Planes not bridges December 10, 2011 at 7:30 am

If I had to pick an unconventional member for the financial stability board, I would seriously consider an aircraft safety expert. Let me explain.

Civil engineers know about one kind of safety: the safety of bridges and suchlike. The crucial thing about a bridge, for our purposes, is that its elements don't change their nature when you change the design. They might change their role – whether they are in compression or tension, say – but their physical properties are constant.

Two planes that kept us safe

Aircraft safety adds an element that civil engineers don’t have to worry about (much) – people. People react to the situation they find themselves in. They learn. Importantly, they form theories about how the world works and act upon them. Thus aircraft accidents are often as much about aircrew misunderstanding what the plane is telling them as about mechanical failure. The system being studied reacts to ‘safety’ enhancements because the system includes people, and hence those enhancements may introduce new, hard to spot error modes.

The report into the Air France 447 crash is an interesting example of this. See the (terrifying) account in Popular Mechanics here. As they say in their introduction to the AF447 Black Box recordings:

AF447 passed into clouds associated with a large system of thunderstorms, its speed sensors became iced over, and the autopilot disengaged. In the ensuing confusion, the pilots lost control of the airplane because they reacted incorrectly to the loss of instrumentation and then seemed unable to comprehend the nature of the problems they had caused. Neither weather nor malfunction doomed AF447, nor a complex chain of error, but a simple but persistent mistake on the part of one of the pilots.

AF447 was, by the way, an Airbus A330, a plane packed to the ailerons with sophisticated safety systems. Not only did they not work; the plane crashed partly because of the way the pilots reacted to their presence.

Aircraft risk experts understand this kind of reflexive failure, whereby what went wrong was not the plane or the pilot but rather a damaging series of behaviours caused by the pilot's incomplete understanding of what the plane was and wasn't doing. This is often exactly the type of behaviour that leads to financial disasters. Think, for instance, of Corzine's incomplete understanding of the risk of the MF Global repo position.

Another thing aircraft safety can teach us is the importance of an open, honest post mortem. Despite the embarrassment caused, black box recordings are widely available, at least for civil air disasters. (The military is less forthcoming, although things often leak out eventually – see for instance here for a fascinating account of the Vincennes disaster.) In contrast, we still don't have the FSA's report on RBS, let alone a good account of what happened at, to pick a distressed bank more or less at random, Dexia. UBS is a beacon of clarity in an otherwise murky world.

It is hard to learn from mistakes if you don't know about many of the bad things that happened, or what the people who did them believed at the time. Finance, like air safety, is epistemic: to understand it, you have to know something about what people believe to be true, as that will give some insight into how they will behave in a crisis.

The more I think about this, the more I think risk managers in other disciplines have to teach us financial risk folks.

The stock market as a distributed system October 12, 2010 at 8:04 am

My PhD is in computer science, specifically the theory of distributed computation, so I naturally tend to use that metaphor. It does make sense, though, for the stock market. After all, a lot of volume is being driven by exactly that: distributed computers interacting.

In The stock market as a single, very big piece of multi-threaded software at Ars Technica, Jon Stokes makes that point rather well. Talking about the Flash Crash, he says:

The market did what every piece of multithreaded software eventually does in response to just the wrong mix of execution conditions and inputs: it crashed.

Now there may be complex distributed systems with asynchronous interaction that have never crashed and always do what is intended. But I have never come across any. So Jon's point is reasonable. As he says:

The market is fairly fragile, which is about what you’d expect from a giant, multithreaded computer that has been brought online, piecemeal, with no oversight. The wrong input at the wrong moment could trigger a race condition, or a deadlock, a livelock, or some other concurrency hazard that brings it all down.
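
To make the hazard concrete, here is a minimal sketch (my illustration, not Jon's, in Python) of the simplest such bug, a lost-update race. The sleep stands in for an unlucky scheduler switch between a read and a write of shared state:

```python
import threading
import time

# A shared balance with no lock: the classic lost-update race.
balance = 100

def withdraw(amount):
    global balance
    local = balance           # read shared state
    time.sleep(0.01)          # stand-in for an unlucky scheduler switch
    balance = local - amount  # write back a now-stale value

# Two concurrent withdrawals of 60 from a balance of 100. Serialised
# correctly, the second would see the first's effect; instead both
# threads read 100 and one update is silently lost.
t1 = threading.Thread(target=withdraw, args=(60,))
t2 = threading.Thread(target=withdraw, args=(60,))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # prints 40, not -20: one withdrawal has vanished
```

No component failed and no exception was raised, yet the system state is simply wrong; scale that class of bug across thousands of interacting machines and the Flash Crash looks rather less surprising.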

Perhaps the SEC should hire some of my old colleagues from the safety critical systems community.

In favour of the narrow bank model? May 7, 2008 at 8:25 pm

Naked Capitalism has an insightful commenter. I'm not sure I agree with all of this, but it is certainly a helpful perspective:


Perhaps a lesson to be learned here is that liquidity acts as an efficient conductor of risk. It doesn’t make risk go away, but moves it more quickly from one investment sector to another.

From a complex systems theory standpoint, this is exactly what you would do if you wanted to take a stable system and destabilize it.

One of the things that helps to enable non-linear behavior in a complex system is promiscuity of information (i.e., feedback loops but in a more generalized sense) across a wide scope of the system.

One way you can attempt to stabilize a complex system through suppressing its non-linear behavior is to divide it up into little boxes and use them to compartmentalize information so signals cannot easily propagate quickly across the entire system.

This principle has been recognized in the design of software systems for several decades now, and is also a design principle recognizable in many other systems, both natural and artificial (cf. biology, architecture), which are very robust with regard to exogenous shocks. Stable systems tend to be built from structural hierarchies which do not share much information across structural boundaries, either laterally or vertically. That is why you don't die from a heart attack when you stub your toe, your house doesn't collapse when you break a window, and if your computer crashes it doesn't take down the entire internet with it.

Glass-Steagall is a good example of this idea put into practice. If you use regulatory firewalls to define distinct investment sectors and impose significant transaction costs at their boundaries, that will help to reduce the speed and amplitude of the signals which propagate from one sector to another, so a collapse in one of them will be less likely to cause severe problems in the others.

It worries me that we’ve torn down most of these barriers in the last several decades in the name of arbitrage, forgetting that the price we paid for them in inefficiency was a form of insurance against the risk of systemic collapse. This is exactly what I would do if I wanted to take a more or less stable, semi-complex system and drive it in the direction of greater non-linearity.
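
The commenter's point about damping propagation can be made concrete with a toy model. The sketch below is my illustration, in Python with made-up parameters, not anything from the original comment: a unit shock is applied to one node of a system in which every node absorbs a fraction of every other node's stress, and whether the shock dies out or amplifies depends entirely on the strength of the cross-couplings:

```python
def total_stress(n, coupling, dissipation=0.5, steps=20):
    """Apply a unit shock to node 0; at each step a node keeps a
    dissipated fraction of its own stress and absorbs `coupling`
    times the stress of every other node."""
    stress = [0.0] * n
    stress[0] = 1.0
    for _ in range(steps):
        total = sum(stress)
        stress = [dissipation * s + coupling * (total - s) for s in stress]
    return sum(stress)

# Freely connected sectors: feedback amplifies the shock.
print(total_stress(10, coupling=0.15))  # grows explosively
# 'Firewalled' sectors with weak cross-links: the shock dies away.
print(total_stress(10, coupling=0.01))  # decays towards zero
```

The numbers are arbitrary, but the threshold behaviour is real: the system is stable only while dissipation dominates the total cross-coupling, and the regulatory firewalls described above are precisely a way of keeping the coupling term small.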

Happy Birthday Blog March 5, 2008 at 2:10 pm

Danger, Fragile

On the second birthday of this blog, it is perhaps worth spending a moment on the name. Deus ex Macchiato* is so named for two reasons. The God in the machine is one: I'm interested in how the rules of a particular situation determine behaviour. Sometimes simple constraints can have unexpected consequences – a great example is how the Market Abuse Directive prevented the Bank of England from intervening early in the Northern Rock crisis. The 'Macchiato' comes from a system that works really well: coffee in Italy. It's cheap, it's good, and everyone expects it to work. No Italian would expect a local bar to serve anything other than a great coffee, and few would pay more than a euro for it.

As I contemplate the brownish steamed milk that the average British coffee shop provides, it interests me how a small change in the rules can produce a big difference in outcomes. Whether you are a finance professional, an engineer, an IT specialist, a regulator or a politician, you might have reason to be interested in systems engineering seen that way. Your scheduled programme from the frontier resumes shortly.

*Deus ex machina is a literal translation – a calque – of the Greek ἀπὸ μηχανῆς θεός. Originally it referred to the actors playing gods being lowered by a crane onto the stage. That might have spoilt the illusion, but it was the only way to achieve what was needed.

Vegas Frontier

Update. There is an article by Jenni Russell today in the Guardian which gives another good example of how badly written rules and ill-chosen performance metrics can lead to undesirable outcomes.

Just imagine you are part of the government. Among your principal concerns are how to hold society together at a time of rapid change. You worry about social and community cohesion and the practical, psychological and economic isolation of the elderly, the disabled, rural-dwellers and the poor. You set up a Department of Communities and spend billions on initiatives to create thriving, sustainable communities that will offer a sense of community, identity and belonging. Sustainability is another key concern. You care about the planet and exhort people to make fewer car journeys and walk or cycle more.

You inherit, all around the country, a network of local offices which happen to provide many of the functions you seek. They give people access to cash, benefits and government services, as well as connecting them through the post. The majority are combined with a shop, which makes them a social hub and meeting point. The postmasters who run them are an informal source of support and advice on everything from benefit claims to what to do in the event of a death. In cities almost everyone lives within half a mile’s walk of one, and frequently their presence is what sustains a small shopping parade. In rural areas they allow people to lead local lives, and are often the last service left in places that have been steadily stripped of buses, shops and schools. So what do you do? In the name of economic efficiency, you take government business out of their hands, and then start closing them down, in their thousands. […]

The Post Office is not an independent actor. Its strategy is decided by the government which, as its sole shareholder, defines its purpose and the level of financial support. Labour has already shut 4,500 offices and made many more unprofitable by moving key business, such as the payment of pensions or TV licences, to banks or the net. Now it is demanding that the network must close 2,500 of the remaining 14,000 offices because they are making “unsustainable” losses of £200m a year. The government announces that it will carry on subsidising the network, at £3m a week, but only for the next three years. I asked the Post Office press officer what the company’s mission was. “To go into profit by 2011,” she said. What about community needs? “You’ll have to ask the government about that.”

What is so outrageous about this strategy is that the government is acting within completely artificial constraints. Separating the Post Office from Royal Mail 20 years ago, removing key functions five years ago, and defining the network as a business, are all political decisions, not a matter of economic fact.

What does safe mean? May 25, 2007 at 9:43 pm

It is an interesting question. Nothing is safe in the sense of being 100% robust under every set of circumstances. If a two hundred foot high sea monster climbs out of the Thames and starts munching on Canary Wharf, a few disaster recovery plans would doubtless be found wanting.

There are at least two issues. The first is to encourage people to be sceptical about the performance of any construction, whether mechanical, electronic or intellectual: there are some events that will screw up any design.

But then we come to the problem of estimating how unlikely these testing circumstances are. Typical operational risk events involve a concatenation of errors, of individually improbable circumstances. Sadly it seems that sometimes these events are not independent, so the joint probability of a screw up is much bigger than one might think. For that matter, the equity, credit, FX and interest rate markets often have low return correlations, but they can all move together in a crisis, as LTCM found out. It isn't that a plausible worst case is bad – we knew that – it is that the worst case can be much more likely than it appears.
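
A toy calculation, with made-up numbers of my own, shows how large the effect can be. Suppose a loss requires three 1-in-100 controls to fail at once:

```python
# Each of three controls fails with probability 1 in 100
# (illustrative numbers only).
p = 0.01

# If the failures were independent, the joint probability is tiny:
print(p ** 3)  # 1e-06

# Now suppose a common stress scenario occurs 1% of the time and,
# within it, each control fails half the time. Counting only the
# failures inside the stress scenario:
p_stress = 0.01
p_fail_given_stress = 0.5
print(p_stress * p_fail_given_stress ** 3)  # 0.00125, some 1,250x larger
```

A common driver that correlates the failures makes the joint probability over a thousand times larger than the independence assumption suggests, which is the LTCM lesson in miniature.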

Methinks Krugman doth protest too much January 8, 2007 at 8:43 pm

I have just been reading an interesting article by Paul Krugman. Partly it is a discussion of The Theory of Comparative Advantage (yes, I had to look it up, embarrassingly: try this if you do too) and how poorly understood it is; but partly it is an (understandable) cri de coeur about the failure of supposed intellectuals to take simple economic models seriously. Perhaps it struck a chord because I saw a similar complaint earlier, at Overcoming Bias:


Consider how differently the public treats physics and economics. Physicists can say that this week they think the universe has eleven dimensions, three of which are purple, and two of which are twisted clockwise, and reporters will quote them unskeptically, saying “Isn’t that cool!” But if economists say, as they have for centuries, that a minimum wage raises unemployment, reporters treat them skeptically and feel they need to find a contrary quote to “balance” their story.

In short, we, the ungrateful general populace, do not take economic models and their outputs seriously. Well, following that sage of economics, Homer Simpson: Duh. Yes, people don't have the same respect for economic predictions as for ones from the physical sciences. Some reasons could include:

  • Physics has a long and glorious history of successful predictions about the world. Economics has, I suggest, explained rather less about its corner of reality, and some of its recent predictions have turned out to be wrong.
  • In Physics, if experiment repeatedly and consistently fails to confirm a theory, the theory is reworked to fit the facts. In Economics, if a theory is repeatedly falsified, there seems to be rather more effort spent on explaining why those facts aren't relevant than on figuring out what's wrong. Moreover, Economics has a relatively small 'experimental' community devoted to testing theories, given the number of theorists.
  • Physics embraces complexity. It acknowledged the existence of chaotic dynamics early, for instance, and has tried to find appropriate models of such systems. Similarly, philosophically problematic though quantum mechanics is, physicists are engaging with it. Economists seem to shun complexity and cling to principles, like the rational, self-interested agent, that seem to have little predictive power (vide Behavioural Finance) and whose day could well be past.

Uncomfortable though this may be for Economists, until they have something like the success of Physics to tuck under their belts, they aren't going to get anything like the same respect.

The utility of complexity September 24, 2006 at 2:53 pm

A recent article in Salon (watching an ad is required to enter) nicely captures one of the preoccupations of this blog. It begins with a discussion of an essay by Devesh Kapur, The Knowledge Bank. Paraphrasing slightly, Kapur suggests that economic consultants are not primarily motivated by finding successful outcomes:

The very nature of academia means that researchers […] are not accountable for the consequences [of their work] in the sense that it responds to professional incentives, not to development payoffs. These professional incentives place a large positive premium in academic papers on the novelty of ideas, methodological innovation, generalizability and parsimonious explanations. Detailed country and sector knowledge, an acknowledgment that the ideas may be sensible but not especially novel, that uncertainty and complexity rather than parsimony are perhaps the ground reality, are all poor country-cousins of research that purports to find universal truths.

Kapur is talking about the social sciences in general and developmental economics in particular, but the conclusions hold much more broadly. As Salon says:

One of the clearest fault-lines in economic debates […] is between those who believe they know one answer that fits all questions, and those who believe every question deserves a different answer — that what works for Singapore may not work for Somalia, that the circumstances of Bangladesh require a different approach than the conditions of Brazil. It’s a fault-line that transcends ideological differences between right and left, free trader and protectionist. On one side, a willingness to accept complexity and uncertainty also concedes that one may not know what the answer to a given question is; on the other, the rightness of the answer is taken as a given, and it’s the implementation that must be at fault; it’s never “I don’t know” and always “how did you screw it up?”

This is a lovely summary of a ubiquitous fallacy: just because we sometimes have to pretend that the world is simple to get on doesn’t mean we have to believe it. Incrementalism — trying something you think might work, seeing what happens, and adjusting your strategy based on the outcome — is surely the only rational response to a complex, partially comprehended reality. Bangladesh is different from Brazil, and 2006 Britain from 1990 Britain, for that matter. Yesterday’s remedies might work, but we should approach their application with a suitable sense of humility.