When we get risk wrong
March 21, 2011 at 10:47 pm

I have been involved in risk analysis, measurement and management for more than half of my life. That’s a scary thought, but it does perhaps mean that I might have some insight. It helps (and apologies if this is sounding like a boast; bear with me) that I have worked not just in finance, but also in real time safety critical systems. Not nuclear, admittedly, but nonetheless things that you really don’t want to screw up. Based on that experience, there are two types of error that seem to me to be pretty common in risk analysis.

The first is mis-estimating the probability of an occurrence. That is, you have considered that something can happen, but you thought that it was a lot less likely (the error is usually that way around) than it turns out to be. This is often the kind of mistake that we make in financial risk management.

The second is somehow more grievous. This type of error occurs when we fail to consider that something can happen at all. An event – that turns out to be both possible and material when it occurs – is completely ignored in our analysis.

The first error is less worrying simply because it can be hedged, partly at least, by a worst case analysis. Forget the probabilities, just suppose the worst thing that we can think of happens. If that really is the worst thing, and we can live with the outcome, then there is some reason for comfort – unless we have made an error of the second kind. (This, by the way, is why reverse stress tests, where you assume a really bad thing has happened, are so useful.)
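To make the distinction concrete, here is a toy sketch in Python. The scenario names, probabilities and loss figures are invented purely for illustration; nothing here is a real model.

    # Illustrative only: the scenarios, probabilities and losses are made up.
    scenarios = {
        # scenario: (assumed probability, loss if it happens)
        "mild recession":   (0.20,  50),
        "severe recession": (0.05, 300),
        "funding freeze":   (0.01, 900),
    }

    capital = 1000  # the loss we could survive

    # Errors of the first kind live in the probabilities: they may be badly wrong.
    expected_loss = sum(p * loss for p, loss in scenarios.values())

    # The worst-case check ignores the probabilities entirely, so mis-estimating
    # them does no harm, but it only covers the scenarios we thought to list.
    worst_case = max(loss for _, loss in scenarios.values())

    print(f"expected loss {expected_loss:.0f}, worst case {worst_case}, "
          f"survivable: {worst_case <= capital}")

    # A reverse stress test runs the other way: start from a loss big enough to
    # break us (anything above `capital`) and ask what chain of events could
    # produce it. Errors of the second kind are the scenarios missing from the
    # dict above; none of these calculations can see them.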

I don’t know how you reliably catch errors of the second kind. I could descend into platitudes – hire smart people, avoid groupthink, look at the problem in as many ways as you can – but they won’t help much. There really are more things in heaven and earth, Horatio, than are dreamt of in anyone’s philosophy, and that makes the problem quite hard.

TED throws up his (carefully manicured?) hands in horror at this point and suggests a large dose of Carpe Diem. Jonathan Hopkin, meanwhile (perhaps reflecting a slightly less good cellar than TED’s), takes this as a cue to suggest a healthy dose of conservatism in risk management:

We need to err on the side of avoiding the unthinkable, rather than risking the unthinkable just to get the desirable.

This is entirely sensible, of course. But the fundamental problem remains: figuring out what, exactly, the unthinkable is. This, for me, is a very good argument against nuclear power, at least in its current form. When you are dealing with something that has a half-life of more than 700 million years (U-235), you can be certain that you can’t think of everything that might happen. Your risk assessment, in other words, is bound to be wrong due to errors of the second kind. And with something as dangerous as uranium and its byproducts, that is a problem.

Update. FT Alphaville has interesting coverage of a presentation about storage of fuel rods at the Fukushima reactor here. They say:

it’s once again not quite clear if anyone is really taking account of the full storage time needed for these fuels.

Off-site storage for Fukushima coming online in 2012 will store 20 years of fuel for 50 years. And then? You can count many fuels’ half-lives in the 10,000s of years.

That, by the way, is very conservative. To be safe you need to go to at least ten half-lives. That’s 7 billion years for U-235, and over 40 billion for U-238. Humanity has not built anything that has lasted 10,000 years, let alone a billion. We are, in other words, at least a hundred thousand times less skilled than we need to be to take on nuclear fuel storage safely.
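The arithmetic, as a back-of-the-envelope check in Python (the half-lives are the standard published values; the ten-half-lives threshold is the rule of thumb used above):

    # Back-of-the-envelope check of the storage arithmetic above.
    # Half-lives are the standard published values (approximate).
    half_life_years = {
        "U-235": 7.04e8,   # about 704 million years
        "U-238": 4.47e9,   # about 4.47 billion years
    }

    TEN_HALF_LIVES = 10                  # the rule of thumb used in the post
    remaining = 0.5 ** TEN_HALF_LIVES    # fraction of the original nuclei left, ~0.1%

    for isotope, t_half in half_life_years.items():
        storage_years = TEN_HALF_LIVES * t_half
        print(f"{isotope}: {storage_years:.2e} years of containment "
              f"to get down to {remaining:.2%} of the original material")

    # The oldest human-built structures are of order 1e4 years old, so the
    # shortfall is roughly storage_years / 1e4: several hundred thousand
    # times for U-235 alone.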

3 Responses to “When we get risk wrong”

  1. From the perspective of someone who’s done a lot of software development, I think the safest way to avoid the second type of error is to reduce complex systems to simple, reliable and predictable components that have simple, low-dimensional interfaces between each other. I think the ultimate triumph of this approach to scaling complex systems is the microprocessor.

    It uses very simple, ultra-tested, standardized components (going from transistors, to NAND gates, to latch memory, to arithmetic units, to pipelines) that are repeated endlessly. Rather than trying to customize each component, pure efficiency is sacrificed for high reliability and well-understood, simple functionality. So even though a microprocessor has very extensive and complex functionality, the total amount of information that needs to be understood about its workings is (comparatively) very little. An undergraduate can learn how to design a fully functioning CPU in a single class.

    Unfortunately the financial system is pretty close to the exact opposite end of the spectrum. Ideally each asset’s return distribution would have completely orthogonal risk components or a very simple relation to other assets. In reality the relations between assets and players in the market are myriad, idiosyncratic, non-standardized, reflexive, and ever-changing. In contrast to the microprocessor, fully describing the financial markets requires a huge, functionally infinite amount of information (which is why alpha never goes away).

    There might be other ways to make complex systems more reliable and predictable, but I really don’t know any way besides the tried and true method of composing them out of standardized, reliable, simple components.
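    To make the composition idea concrete, a toy sketch in Python (a standard textbook NAND construction, included purely for illustration): every gate below, up to a half adder, is built out of the same two-input NAND.

        # Toy illustration: building familiar gates from a single standardized
        # component (a two-input NAND), in the spirit of the comment above.

        def nand(a: int, b: int) -> int:
            return 0 if (a and b) else 1

        def not_(a: int) -> int:
            return nand(a, a)

        def and_(a: int, b: int) -> int:
            return not_(nand(a, b))

        def or_(a: int, b: int) -> int:
            return nand(not_(a), not_(b))

        def xor(a: int, b: int) -> int:
            n = nand(a, b)
            return nand(nand(a, n), nand(b, n))

        def half_adder(a: int, b: int) -> tuple[int, int]:
            """Sum and carry bits, built only from the gates above."""
            return xor(a, b), and_(a, b)

        # The interface of each piece is tiny, so it can be checked exhaustively,
        # which is one reason the composition stays predictable.
        for a in (0, 1):
            for b in (0, 1):
                assert half_adder(a, b) == ((a + b) % 2, (a + b) // 2)

    Each piece has an interface small enough to test exhaustively, which is exactly why the composition stays predictable as the stack gets taller.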

  2. Doug

    Thank you, that is most insightful. I agree completely. The only thing that has allowed us to build such extraordinarily complex systems as the current generation of microprocessors is the wonderful idea of separation of concerns. Keep the individual components simple, their interfaces uniform, and the joint behaviour a simple function of the behaviour of the pieces, and you can build big with success. Don’t, and you will never figure out what an even moderately complex system will do.

  3. Right about the cellar, sadly…