Category / Engineering

Planes not bridges December 10, 2011 at 7:30 am

If I had to pick an unconventional member for the financial stability board, I would seriously consider an aircraft safety expert. Let me explain.

Civil engineers know about one kind of safety: the safety of bridges and the like. The crucial thing about a bridge, for our purposes, is that its elements don’t change their nature when you change the design. They might change their role – whether they are in compression or tension, say – but their physical properties are constant.

Two planes that kept us safe

Aircraft safety adds an element that civil engineers don’t have to worry about (much) – people. People react to the situation they find themselves in. They learn. Importantly, they form theories about how the world works and act upon them. Thus aircraft accidents are often as much about aircrew misunderstanding what the plane is telling them as about mechanical failure. The system being studied reacts to ‘safety’ enhancements because the system includes people, and hence those enhancements may introduce new, hard to spot error modes.

The report into the Air France 447 crash is an interesting example of this. See the (terrifying) account in Popular Mechanics here. As they say in their introduction to the AF447 Black Box recordings:

AF447 passed into clouds associated with a large system of thunderstorms, its speed sensors became iced over, and the autopilot disengaged. In the ensuing confusion, the pilots lost control of the airplane because they reacted incorrectly to the loss of instrumentation and then seemed unable to comprehend the nature of the problems they had caused. Neither weather nor malfunction doomed AF447, nor a complex chain of error, but a simple but persistent mistake on the part of one of the pilots

AF447 was, by the way, an Airbus A330, a plane packed to the ailerons with sophisticated safety systems. Not only did those systems fail to save the aircraft, the plane crashed partly because of the way the pilots reacted to their presence.

Aircraft risk experts understand this kind of reflexive failure, whereby what went wrong wasn’t the plane or the pilot but rather a damaging series of behaviours caused by the pilot’s incomplete understanding of what the plane was and wasn’t doing. This is often exactly the type of behaviour that leads to financial disasters. Think for instance of Corzine’s incomplete understanding of the risk of the MF Global repo position.

Another thing aircraft safety can teach us is the importance of an open, honest post mortem. Despite the embarrassment caused, black box recordings are widely available, at least for civil air disasters. (The military is less forthcoming, although things often leak out eventually – see for instance here for a fascinating account of the Vincennes disaster.) In contrast, we still don’t have the FSA’s report on RBS, let alone a good account of what happened at, to pick a distressed bank more or less at random, Dexia. UBS is a beacon of clarity in an otherwise murky world.

It is hard to learn from mistakes if you don’t know many of the bad things that happened and what the people who did them believed at the time. Finance, like air safety, is epistemic: to understand it, you have to know something about what people believe to be true, as that will give some insight into how they will behave in a crisis.

The more I think about this, the more I think risk managers in other disciplines have to teach us financial risk folks.

The long view of gold September 24, 2011 at 3:38 pm

The Baseline Scenario (HT Naked Capitalism) has a lovely post on the long term gold price. It asks and answers the question:

…the reason gold is so rare in the crust of the earth? Well, that’s because it’s relatively abundant in the earth’s core. When the earth was cooling, most iron-loving minerals sank toward a massive lump of molten iron down near the core. That includes gold and other heavy metallic elements. So where did the gold we currently have come from? Recent evidence suggests it came from smaller asteroid impacts that were not sufficiently large to break through the existing crust of the Earth…

So that means gold is very rare in the crust of the earth, but not so rare in space. In fact, in space, it might be quite common. A detailed study of the moderate sized asteroid Eros over a decade ago indicated it contained over $20 trillion of precious metals—at 1999 prices.

Back then, gold was around $300. Now it’s $1,800, and other commodity metals have increased in price too. The nominal value of Eros alone is probably now over $50 trillion.

Now anything to do with space is really, really expensive. But if, say, you could grab an asteroid, strap a cheap, slow, chemical rocket (or a solar sail) to it, and move it into Earth orbit, for tens of billions, then you might have something. At that point you can use 1960s technology to send up mining robots, grab decent sized chunks of high quality ore, spray ablative on them so that they do not burn up too much on re-entry, and then guide them down.

That is, of course, a pretty big negative for the long term gold price. But in the short term, with the world lacking a plainly safe reserve currency, I wouldn’t go that aggressively short despite recent events.

When a technology dies… July 19, 2009 at 3:42 pm

…you sometimes get a decent monument.

Here are the sound mirrors at Denge, Dungeness.

Sound Mirrors

Control theory and capital May 9, 2009 at 6:13 am

The beginnings of control theory can be summarised as ‘nail it in the right place’. Suppose there’s a plank over a barrel. You can keep the plank level by putting a weight in just the right place, so that it balances.

The problem with this is that any perturbation will cause the plank to swing away from the level. A gust of wind might even do it: the equilibrium is unstable. Therefore control theory 101 would suggest that you move the weight dynamically to keep the plank balanced. If one end swings up, move the weight slightly that way until it swings back.
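The feedback idea can be sketched in a few lines of Python. This is a toy simulation, not real mechanics: the plank dynamics, the gains and the step size are all invented for illustration.

```python
# Toy simulation of the balanced plank. Gravity amplifies any tilt
# (the equilibrium is unstable); a proportional controller shifts
# the weight against the tilt to bring the plank back to level.
# All constants here are made up for illustration.

def simulate(gain, steps=200, dt=0.05):
    angle, angular_velocity = 0.1, 0.0   # start slightly tilted
    for _ in range(steps):
        # Instability: torque grows with the tilt itself.
        torque = 1.0 * angle
        # Feedback: push back in proportion to tilt and tilt rate.
        torque -= gain * angle + 0.5 * gain * angular_velocity
        angular_velocity += torque * dt
        angle += angular_velocity * dt
    return abs(angle)

print(simulate(gain=0.0))  # no control: the tilt blows up
print(simulate(gain=5.0))  # with feedback: the tilt decays to ~0
```

With the gain at zero the tilt grows exponentially; with feedback switched on it dies away. That difference, not the particular numbers, is the point.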

Over the years, a lot has been discovered about how to control unstable objects moving in unpredictable environments. Modern fighter aircraft are in some ways a triumph of control theory: without the computers which control their flight surfaces, they would fall out of the sky. And what the computers do is determined by control theory.

One of the many reasons that the current regulatory capital regime is pants (not to put too fine a point on it) is that it is stuck with static control. That is, think of a number, and that’s the amount of capital that you need. In reality, the regime needs to be dynamic: the anticyclical capital of earlier discussions is one piece of this puzzle. What struck me as I was walking home from a lecture last night (one which touched in passing on control theory) is that we do not even have the right inputs to develop a control theory of bank capital. That is, we don’t really know what the equivalent of the angle of the plank (or the speed, pitch, yaw and so on of the fighter) is. One can think of some things that it might make sense to monitor, like credit spreads or the availability of interbank liquidity, but so far as I am aware, there has been no systematic study which discusses the indicators of health of the banking system, let alone identifies how they respond to changes in regulation. That would be the basis of a serious control theory of banking.

Gearing up for trouble March 29, 2009 at 7:12 pm

Even by the intolerably low standards of this blog, this one is going to be obscure…

I want to talk about gears. Bicycle gears. For ordinary people, rather than, say, drug-crazed Americans who have recently broken their collar bones. So, what does a reasonable rider want from his or her gears?

  • A bottom gear that is low enough to get up most hills. In practice unless you are really fit that means a gear of 42* or lower.
  • A top gear that is high enough that you can pedal going down moderate hills. 100 is plenty.
  • A fine spacing of gears in between.
  • And in particular a relatively gentle change from the little to the big ring at the front.

It doesn’t sound like much, does it? Yet pretty much all standard gear set-ups from the large manufacturers fail on one or more of these criteria. 39/53 or 39/52 at the front gives far too big a change. You want at most a difference of ten teeth, I would suggest, or the change up is too jarring.

To get a bottom gear of 40 with a front ring of 42, you need a big cog on the back of more than 27. You can’t buy one. So that means that 42/52 at the front doesn’t work either.

By this point we have eliminated all of the standard front gears available. What does work is 38/48 on the front, and 13/25 or 13/27 at the back. This gives a top gear of a shade under 100, a bottom gear around 40, and a relatively gentle change between rings. But that requires custom front rings. Why does it have to be so hard?

* The gear in inches is given by dividing the number of teeth at the front by the number at the back and multiplying by the wheel size (usually 27) in inches. The biggest gear is therefore the largest ring at the front through the smallest at the back: the lowest is the small ring at the front through the big ring at the back.
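The footnote’s arithmetic is easy to check in code. The numbers below are the set-up suggested above (38/48 rings with a 13–25 or 13–27 cassette), assuming the traditional 27-inch wheel:

```python
# Gear inches, per the footnote: front teeth / rear teeth x wheel
# diameter in inches (27 for a traditional road wheel).

def gear_inches(front, rear, wheel=27):
    return front / rear * wheel

# The suggested custom set-up: 38/48 rings, 13-25 at the back.
top = gear_inches(48, 13)     # big ring, smallest cog
bottom = gear_inches(38, 25)  # small ring, biggest cog
print(round(top, 1))     # "a shade under 100"
print(round(bottom, 1))  # "around 40"
```

Running it confirms the claims in the text: 48/13 works out to about 99.7 inches and 38/25 to about 41, while the 13–27 option drops the bottom gear to 38.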

When is biggest best (or why to avoid British PCs) March 17, 2009 at 5:42 pm

A few years ago, in a fit of patriotism, or the desire to support local industry, or something equally foolish, I bought a British-built PC. A Mesh. Goodness, what a mistake. It wasn’t delivered when it was promised. When it arrived, it felt shoddy. And ever since, it has had issues. The CPU cooling fan fell off, and had to be replaced. The CMOS battery ran down. (Did you know that ‘CMOS corrupt’ is an unhelpful error message you get when in fact it is likely that a watch battery inside your computer has given up the ghost?) And now the USB hub seems to have crashed. So, gentle reader, learn from my mistake: buy from someone big enough that they are likely to have got their design right. You don’t want the PC equivalent of a TVR.

Towards Core Stability March 8, 2009 at 10:36 am

No, not a post on my recent engagement with Pilates. Instead I am going to be a little Englander for a moment. This isn’t out of prejudice: it is more a consideration of self-sufficiency.

What’s the problem with being an exporter? It is that if your clients stop buying, your economy runs into a wall. Look at Japan.

The problem with being an importer is that it is easy to import inflation.

A measure of self-sufficiency therefore has some interest. The problem for the UK is that, with a few exceptions (cars, killing machines), we killed our manufacturing industry, making progress towards self-sufficiency very difficult. It also makes our natural shortage of materials much worse from a country risk perspective.

Therefore part of any long term financial stability plan should be the revival of manufacturing, especially engineering-based manufacturing, at the expense of financial services. It isn’t impossible: Thatcher only killed manufacturing in the 1980s, and there are still some good engineers left (although many of them are retiring). This story is a tiny ray of light in that regard. But much, much more is needed.

What does safe mean? May 25, 2007 at 9:43 pm

It is an interesting question. Nothing is safe, 100% robust under any set of circumstances. If a two hundred foot high sea monster climbed out of the Thames and started munching on Canary Wharf, a few disaster recovery plans would doubtless be found wanting.

There are at least two issues. The first is to encourage people to be skeptical about the performance of any construction, mechanical, electronic or intellectual: there are some events that will screw up any design.

But then we come to the problem of how to estimate how unlikely these testing circumstances are. Typical operational risk events involve a concatenation of errors, of individually improbable circumstances. Sadly it seems that sometimes these events are not independent so that the joint probability of a screw up is much bigger than one might think. For that matter, the equity, credit, FX and interest rate markets often have low return correlations: but they can all move together in a crisis, as LTCM found out. It isn’t that a plausible worst case is bad — we knew that — it is that the worst case can be much more likely than it appears.

Believing the worst May 17, 2007 at 8:33 pm

Shamelessly stolen from Overcoming Bias:

In 1983, NASA was planning to bring back Martian soil samples to Earth. Contaminating the Earth with alien organisms was an issue, but engineers at Jet Propulsion Laboratories had devised a “safe” capsule re-entry system to avoid that risk. However, Carl Sagan was opposed to the idea and explained to JPL engineers that if they were so certain […] then why not put living Anthrax germs inside it, launch it into space, then [crash the capsule back to earth] exactly like the Mars Sample Return capsule would.

The engineers helpfully responded by labeling Sagan an alarmist and extremist. But why were they so unwilling to do the test, if they were so sure of their system? The answer is probably they feared that if the test failed, their careers would be over and they would have caused a catastrophe. But an out of control Martian virus, no matter how unlikely, would have been equally a catastrophe. However, that vague threat didn’t concentrate their minds like the specific example of anthrax.

Imagine for a moment that those engineers had been forced to do Sagan’s test. Fear of specific disaster would have erased their overconfidence, and they would have moved from ‘being sure that things will go right’ to ‘imagining all the ways things could go wrong’ – and preventing them. The more dangerous the test, the more the engineers would have worked to overcome every contingency.