I have been involved in risk analysis, measurement and management for more than half of my life. That’s a scary thought, but it does perhaps mean that I might have some insight. It helps (and apologies if this is sounding like a boast; bear with me) that I have worked not just in finance, but also in real-time, safety-critical systems. Not nuclear, admittedly, but nonetheless things that you really don’t want to screw up. Based on that experience, there are two types of error that seem to me to be pretty common in risk analysis.
The first is mis-estimating the probability of an occurrence. That is, you have considered that something can happen, but you thought that it was a lot less likely (the error is usually that way around) than it turns out to be. This is often the kind of mistake that we make in financial risk management.
The second is somehow more grievous. This type of error occurs when we fail to consider that something can happen at all. An event – that turns out to be both possible and material when it occurs – is completely ignored in our analysis.
The first error is less worrying simply because it can be hedged, partly at least, by a worst case analysis. Forget the probabilities, just suppose the worst thing that we can think of happens. If that really is the worst thing, and we can live with the outcome, then there is some reason for comfort – unless we have made an error of the second kind. (This, by the way, is why reverse stress tests, where you assume a really bad thing has happened, are so useful.)
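A minimal sketch of the worst-case analysis described above, with made-up portfolio positions and shock scenarios purely for illustration: ignore probabilities entirely and ask only whether the firm survives the worst scenario it can name.

```python
# Illustrative only: positions, shocks, and capital are invented numbers.
# A worst-case analysis discards probabilities and checks survivability
# under the worst named scenario.
scenarios = {
    "equities -40%": -0.40 * 6_000_000,   # loss on a 6m equity book
    "rates +300bp":  -0.05 * 3_000_000,   # assumed 5% hit on a 3m bond book
    "FX shock":      -0.15 * 1_000_000,   # 15% move against a 1m FX position
}
capital = 4_000_000

worst_loss = min(scenarios.values())      # most negative P&L, no weighting
survivable = capital + worst_loss > 0

print(f"worst loss: {worst_loss:,.0f}, survivable: {survivable}")
```

Note the limitation, which is the point of the post: this only hedges errors of the first kind. A scenario that was never put in the dictionary at all (an error of the second kind) is invisible to the analysis.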
I don’t know how you reliably catch errors of the second kind. I could descend into platitudes – hire smart people, avoid groupthink, look at the problem in as many ways as you can – but they won’t help much. There really are more things in heaven and earth, Horatio, than are dreamt of in anyone’s philosophy, and that makes the problem quite hard.
TED throws up his (carefully manicured?) hands in horror at this point and suggests a large dose of Carpe Diem. Jonathan Hopkin, meanwhile (perhaps reflecting a slightly less good cellar than TED), takes this as a cue to suggest a healthy dose of conservatism in risk management:
We need to err on the side of avoiding the unthinkable, rather than risking the unthinkable just to get the desirable.
This is entirely sensible of course. But the fundamental problem of figuring out what the unthinkable is, exactly, remains. This, for me, is a very good argument against nuclear power, at least in its current form. When you are dealing with something that has a half-life of more than 700 million years (U235), you can be certain that you can’t think of everything that might happen. Your risk assessment, in other words, is bound to be wrong due to errors of the second kind. And with something as dangerous as uranium and its byproducts, that is a problem.
Update. FT Alphaville has interesting coverage of a presentation about storage of fuel rods at the Fukushima reactor here. They say:
it’s once again not quite clear if anyone is really taking account of the full storage time needed for these fuels.
Off-site storage for Fukushima coming online in 2012 will store 20 years of fuel for 50 years. And then? You can count many fuels’ half-lives in the 10,000s of years.
That, by the way, is very conservative. To be safe you need to go to at least ten half-lives. That’s 7 billion years for U235, and roughly 45 billion for U238. Humanity has not built anything that has lasted 10,000 years, let alone a billion. We are, in other words, at least a hundred thousand times less skilled than we need to be to take on nuclear fuel storage safely.
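The arithmetic above can be checked on the back of an envelope. After n half-lives a fraction 0.5^n of a radionuclide remains, so ten half-lives leaves under 0.1% (the usual rule of thumb behind “effectively decayed away”). The half-life values below are standard published figures; the 10,000-year benchmark is the post’s own.

```python
# Back-of-the-envelope check of the storage-time arithmetic.
HALF_LIFE_YEARS = {"U-235": 704e6, "U-238": 4.47e9}  # published values

def remaining_fraction(n_half_lives: float) -> float:
    """Fraction of the original radionuclide left after n half-lives."""
    return 0.5 ** n_half_lives

# Ten half-lives -> less than 0.1% of the material remains.
print(f"after 10 half-lives: {remaining_fraction(10):.5f}")

for isotope, t_half in HALF_LIFE_YEARS.items():
    print(f"{isotope}: ten half-lives = {10 * t_half:.2e} years")

# Required timescale vs the ~10,000 years of anything humanity has built.
shortfall = 10 * HALF_LIFE_YEARS["U-235"] / 1e4
print(f"shortfall factor: {shortfall:,.0f}")
```

Ten U-235 half-lives come to about 7 × 10⁹ years, so even against the generous 10,000-year benchmark the shortfall factor is several hundred thousand, comfortably supporting the “at least a hundred thousand times” claim.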