Category / Risk Management

Risk management quote of the week January 27, 2014 at 5:30 am

Well, of six weeks ago actually, but it is still good:

Safety is a process, not a product.

It is being said in the industrial accident context. I’ll let the Ranter explain:

In general, effective safety measures are usually something you do, and scattering costly “devices” around an unchanged process is a classic failure mode. Not least because they might instil a false sense of safety and lead people to take risks…

Accidents cost money, in the same way that quality failures cost money. At the very least, in the most cynical 19th century Yorkshire mill-owner’s view, they cause downtime, quality problems, and damage to expensive equipment. In a less cynical and more general sense, accidents are just one of the sources of excessive variability in the production process, like late change requests, tools whose tolerances are too large, or a virus outbreak among the Windows boxen. If accidents are happening, this is a symptom of problems with the process.

Reworking production processes to eliminate the sources of variability is precisely what industrial managers are meant to do all day.

In other words, risk management is not an overlay, it is what you should be doing all the time.

He ain’t ergodic, he’s my brother November 17, 2013 at 1:57 pm

I have been meaning to blog for a while about ergodicity. I know, exciting stuff. Here’s the skinny:

Roughly speaking, a system is ergodic if the time average is the phase space average. Suppose we have a financial asset with genuinely IID returns: if we look at the average return over time, that will be the same as the average return over all possible paths.

The key point here is that computing the phase space average requires that we can reasonably take multiple copies of the system and observe different paths. A coin, say, can be tossed multiple times, allowing us to see the whole phase space.

For most financial systems, this copyability is not present. It might be reasonable to attribute a probability of default to a company a priori, for instance, but a posteriori it either defaults or it doesn’t, and we cannot take multiple copies of it and see how many times it does in repeated experiments. All we can do is look at it through time.

Given that we can’t often measure a phase space average, it would be handy if many financial systems were ergodic. Unfortunately, as this Towers Watson post points out, they often aren’t.

Risk managers therefore need to be very careful to distinguish two situations:

  1. I have a lot of genuinely independent bets going at once; and
  2. I have one bet that I repeat multiple times.

The former might, for instance, be hedging lots of different single stock options (on uncorrelated stocks, not that there are such things); the latter would be hedging a rolling position on one stock. In the first case you can reasonably take the phase space average – so if I sell the options for more than the historic volatility and I have enough of them, I will on average make money. In the second, you can’t. Here running out of money/hitting your limits and being forced to close out are much bigger issues.
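To make the distinction concrete, here is a minimal simulation (my toy example, not the one in the Towers Watson post) of a multiplicative coin-toss bet whose phase space average grows at 5% per toss even though any single path that repeats the bet shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)

# A multiplicative bet: +50% on heads, -40% on tails, fair coin.
# Phase space (ensemble) growth per toss: 0.5*1.5 + 0.5*0.6 = 1.05.
# Time-average growth per toss: sqrt(1.5*0.6) = sqrt(0.9) ~ 0.949 < 1.

# Situation 1: many genuinely independent bets, averaged across paths.
n_paths, n_tosses = 200_000, 20
final = rng.choice([1.5, 0.6], size=(n_paths, n_tosses)).prod(axis=1)
print(f"mean final wealth across paths: {final.mean():.2f} "
      f"(theory: 1.05**20 = {1.05**20:.2f})")

# Situation 2: one bet repeated through time -- a single long path.
path = rng.choice([1.5, 0.6], size=100_000)
print(f"per-toss growth on one path: {np.exp(np.log(path).mean()):.3f} "
      f"(theory: sqrt(0.9) = {0.9**0.5:.3f})")
```

Sell that bet across thousands of independent punters and the book makes money; take it yourself, over and over, and you go broke. That is the ergodicity failure in miniature.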

Do read the PDF linked to in the Towers Watson post for more: it’s insightful.

Pausing (not Rushing) September 30, 2013 at 10:17 pm

I’m doing a big piece of analytical work at the moment, and it’s swallowing blogging time, so my apologies but DEM is going to be quiet for a while. Right now, for instance, I don’t have much more to say than that it is amazing how much like James Hunt Chris Hemsworth can be made to look, and that I’m currently unconvinced by certain aspects of Hull & White volatility scaling. Which of these two topics gets blogged about first is a toss-up…

James and Chris

The curious case of the risk floor April 28, 2013 at 5:08 pm

Karl Smith has a theory:

… let’s imagine a simple model where we have two sources of risk. There is background risk you cannot avoid. And, there is personal risk that you create through your own choices.

Policy makers have since Thomas Hobbes been attempting to drive down background risk. They have largely been successful. As a result our lives are getting more and more stable.

As that happens, however, folks are going to tend to take on more personal risk. There is a tradeoff between risk and reward. As you face less background risk, for which you are not rewarded, it makes sense to go for more personal risk, for which you are rewarded.

When I take on more personal risk, however, it bleeds over slightly into everyone else’s background risk. People depend on me. If I take risks and lose so big that I debilitate myself then my family and my friends will surely suffer. But, so will my employer, my creditor and the businesses who count on me as a regular. When I go down, they go down.

So, putting it all back together, we come up with something of a risk floor, if you will.

Now, I should say at once that I don’t wholly buy this. But it is an interesting idea, and there is some evidence to support it. For instance, Australian research on compulsory cycle helmets suggests that cyclists who feel safer as a result of their helmets take more risk, resulting in little change in cyclist mortality* despite the new policy. However, it is not obvious that we can generalise from evident physical danger to financial risk.

Suppose we can, though. That would mean, as Smith implies, that risk-reducing policies can, if we are near the floor, cause risk to pop up again in a form that might be harder to spot. That suggests a polluter-pays approach, where we try to charge for the risk being taken rather than prevent it: direct fees to price systemic externalities, rather than capital to prevent them. One might imagine that if FDIC deposit insurance fees were truly fair, then they would comprise a floor element plus a systemic surcharge (at least quadratic in bank size). Such an approach would attempt to charge for the cost of failure rather than capitalising the risks that might lead to it. As I say, I’m not necessarily recommending it, just suggesting that it is an interesting alternative.
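For what it is worth, here is a sketch of the shape such a fee schedule might take. Every number in it is invented; this is not actual FDIC pricing, just the floor-plus-quadratic-surcharge idea in code:

```python
def deposit_insurance_fee(assets_bn: float) -> float:
    """Hypothetical fee: a flat floor plus a surcharge quadratic in
    bank size. All coefficients are invented for illustration."""
    floor = 0.0005 * assets_bn           # 5bp of assets, assumed
    surcharge = 1e-6 * assets_bn ** 2    # systemic surcharge, assumed
    return floor + surcharge

for size_bn in (10, 100, 1000):          # bank assets in $bn
    print(f"${size_bn:>4}bn bank pays ${deposit_insurance_fee(size_bn):.3f}bn")
```

The point is only the shape: the surcharge grows faster than the bank does, so size itself becomes expensive.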

* Such laws are, however, good at discouraging cycling.

Enhancing the Risk Disclosures of Banks November 13, 2012 at 5:25 pm

I have been reading a useful and timely report on enhancing bank risk disclosures. Its objectives are sensible, and seven fundamental principles are suggested:

  1. Disclosures should be clear, balanced and understandable.
  2. Disclosures should be comprehensive and include all of the bank’s key activities and risks.
  3. Disclosures should present relevant information.
  4. Disclosures should reflect how the bank manages its risks.
  5. Disclosures should be consistent over time.
  6. Disclosures should be comparable among banks.
  7. Disclosures should be provided on a timely basis.

As many commentators (notably Bloomberg’s Jonathan Weil) have pointed out, we are far from this world right now.

The report goes on to give a lot of reasonable detailed recommendations. This is what they have to say on overall capital requirements disclosures, for instance:

[Banks should] Present a table showing the capital requirements for each method used for calculating RWAs for credit risk, including counterparty credit risk, for each Basel asset class as well as for major portfolios within those classes. For market risk and operational risk, present a table showing the capital requirements for each method used for calculating them. Disclosures should be accompanied by additional information about significant models used, e.g. data periods, downturn parameter thresholds and methodology for calculating loss given default (LGD).

And here is the first of four paragraphs on market risk:

Provide information that facilitates users’ understanding of the linkages between line items in the balance sheet and the income statement with positions included in the traded market risk disclosures (using the bank’s primary risk management measures such as Value at Risk (VaR)) and non-traded market risk disclosures such as risk factor sensitivities, economic value and earnings scenarios and/or sensitivities.

It sounds elementary – of course you would want that – but it is a measure of how far banks’ disclosures fail to meet the standard of ‘what a reasonable person trying to understand the firm would ask’ that I cannot think of a single large bank today that meets that requirement. There is a lot of information in annual reports and Basel pillar 3 documents, but there is a lot that is missing too. These recommendations are a very good step towards filling in the gaps.

Banks will of course push back on this. The last thing that most of them want is (in the words of paragraph 26) to ‘provide information that facilitates users’ understanding of the bank’s credit risk profile, including any significant credit risk concentrations’. That is short-sighted: investors would trust banks more if they could understand them. The reason that many trade below book is their opacity, and enhanced disclosure is the only solution to that.

Getting fundamental September 4, 2012 at 6:01 am

My response to the fundamental review of the trading book is here.

Culturally neutral risk reporting August 6, 2012 at 9:17 am

Last week, I mused a little on the cultural theory of risk and its consequences for financial risk management. Today I want to say something about risk reporting in that context. Specifically, I would suggest that one wants risk reporting that meets the needs and attitudes of all four cultural groups. That means:

1. Individualists want ‘ordinary conditions’ risk reporting; things like 95% 1-day VaR. They will also focus much more on the P/L than on risk measures. (A toy report covering all four groups appears after this list.)

2. Egalitarians want to see different risks treated fairly, so they are a valuable resource in ensuring that the risk framework doesn’t give unfair advantages to some businesses. They will want to see a range of stress/scenario tests reflecting their belief in the fragility of ‘ordinary’ conditions.

3. Authoritarians want a strongly-policed and comprehensive limit framework, capital plans, and so on. For them it is all about risk-process-as-constraint. Focus your authoritarians on enforcement not design.

4. Fatalists are hard to please because they think (probably rightly) that no risk framework can avert disaster all of the time. However, giving them notional measures (if everything goes completely wrong we can still only lose x) and highly pessimistic stress tests helps. Their skepticism is a very valuable tool, and you want at least one prominent fatalist in any high level risk committee.
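To make that concrete, here is the toy report promised above, touching all four constituencies. Every number, scenario and limit below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
pnl = rng.normal(0, 1e6, size=750)   # hypothetical daily P/L history, USD

# 1. Individualist: an ordinary-conditions measure, 95% 1-day VaR.
var_95 = -np.percentile(pnl, 5)

# 2. Egalitarian: the same stress scenarios applied evenly to every book.
stresses = {"rates +200bp": -3.2e6, "equity -30%": -5.1e6}  # invented

# 3. Authoritarian: a hard limit, policed.
var_limit = 2.0e6

# 4. Fatalist: the notional worst case if everything goes wrong at once.
notional_worst = -25e6               # assumed total amount at risk

print(f"95% 1d VaR: {var_95:,.0f} (limit {var_limit:,.0f}, "
      f"breached: {var_95 > var_limit})")
for scenario, loss in stresses.items():
    print(f"stress {scenario}: {loss:,.0f}")
print(f"worst case: {notional_worst:,.0f}")
```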

Construed this way, risk reporting can be a cross cultural communications tool. It needs to cater to all four attitudes if it is to be effective in that role. The key is to ensure that as one or other attitude becomes culturally dominant, the risk framework does not become too distorted.

The institutional consequences of the cultural theory of risk August 4, 2012 at 8:20 am

Yesterday, we outlined four attitudes which the Cultural Theory of Risk advances as a fundamental classification. Today, I want to look at what that classification suggests organizationally.

David Ingram suggests (1) that you find

Individualists in Sales/Underwriting/Trading. They tend to be paid with a high proportion of incentives or bonuses. They prefer to get paid for what value that they bring to the firm. They will frequently argue with the nit pickers and bean counters about how good the deals that they do will be for the company.

Fatalists in Operations and IT. Their priorities are frequently changed without their knowledge. Many firms tend to value the flexibility of Fatalists who do not expect things to stay steady and predictable anyway. Fatalists in a firm are quite happy with a job where they do not know in advance what they will be doing from day to day. You probably want a Fatalist on your help desk.

Egalitarians in Compliance, Internal Audit, ALM and some CFO and Legal functions.
Egalitarians will tend to keep to themselves within the firm and have few connections with the other areas. They tend to think that the company is going into decline, but that their department is run well and things would be much better if people just listened to their group more.

Authoritarians populate the risk management area and are commonly CFOs. When there is an Authoritarian CEO or powerful authoritarian senior administrative officer, the firm will usually have a very organized planning process with regular update to short and long-term plans. The emphasis of Authoritarians in management will be to set goals and measure progress against those goals.

Now, if we read this as a broad tendency rather than a prescription, I think he has a point. Certainly the CFOs I have known tend to be authoritarian (sometimes sufficiently so as to be detrimental to their firms), and you certainly want someone who believes in fairness and the rule of law in compliance. Where I think Ingram goes wrong, though, is in a statement that he makes emphatically:

Enterprise Risk Management is clearly an Authoritarian risk perception

It can be, but it does not need to be. Risk management infrastructure should be a decision-making aid, no more. At its best, it provides good information – quantitative and judgemental – which allows management to decide which risks to take and which to hedge.

If objective risk measures are used in a purely authoritarian way, two things happen. First, the firm will miss out on opportunities that don’t look good according to the particular risk framework. Individualists will see this, and some will leave in frustration. Second, other, sneakier individualists will try to game the system. Sometimes they will succeed, and that can lead to disaster.

Firms with better risk management implicitly understand this. Their risk philosophy is a blend of egalitarian (let’s create a fair risk framework, including human decision judgement) and fatalist (how are those bastards trying to screw with me today?). Yes, sometimes you need an authoritarian to say ‘no, you can’t go over your risk limit’. But if that is all you have, you are in trouble; although perhaps slightly less trouble than if risk management is run by a bunch of fatalists (2). Risk management, as with so many other things, is better if it has more cultural diversity.

(1) His discussion is based on insurers; I have adapted it slightly to be more bank-relevant.

(2) There is a school of thought that risk managers are simply short out-of-the-money puts on the P/L, and that their aim in life is to be employed long enough to collect sufficient premium. This is an essentially fatalist view of the role (which is not entirely without basis).

Cultural attitudes to risk August 3, 2012 at 11:18 am

There is a body of work known as the cultural theory of risk which identifies four attitudes towards risk taking. Like any sociological classification it is not precise, but nevertheless it is both interesting and insightful.

There have been a number of articles on this in Wilmott magazine: the following is a paraphrase of one by David Ingram.

Cultural Theory suggests that there are four ways that people approach risk:

1. Individualists believe that the world is self correcting… They believe in unfettered capitalism – self-regulating markets… That individual effort and imagination will create more for everyone (rising tide lifts all boats – growing the pie before you divide it).

Personally I quibble a bit with the ‘self-correcting’ part as I think that many people who intuitively seem to be individualists do not believe in mean reversion, but we will let that pass.

2. Egalitarians believe that the world is in a delicate balance. Any major change could result in disaster. Egalitarians focus on fairness and dividing the pie. Egalitarians are frugal.

Again, the ‘delicate balance’ part for me isn’t necessary; the key criterion for being an egalitarian is a belief in fairness.

3. Authoritarians believe that risk taking is acceptable only if controlled by experts. There is a need for rules and laws to keep risk taking under control.

4. Fatalists believe that the world is unpredictable and uncontrollable. Fatalists are those folks who will not conform to the rules of the Hierarchists, who cannot muster the fervor to become members of an Egalitarian group and who do not have the ambition to strike out on their own as an Individualist. They are outsiders and seldom control things.

Tomorrow, I want to say something about the consequences of these attitudes for financial risk management. Meanwhile, let me leave you with a link or two for further reading, should you be interested.

Understanding Jamie Dimon’s Testimony: the strange case of CRM June 13, 2012 at 9:46 am

In the theory of programming languages, you learn that parsing is a syntactical operation that doesn’t require any analysis of meaning. So I shouldn’t complain that dealbook’s recent post, Parsing Jamie Dimon’s Testimony, doesn’t inform: really, it doesn’t promise that it will. It does, though, set up enough clues that you can guess a little more of the JPMorgan story.

Here are the key pieces.

  • Dimon said:

    In December 2011, as part of a firmwide effort in anticipation of new Basel capital requirements, we instructed CIO to reduce risk-weighted assets and associated risk. To achieve this in the synthetic credit portfolio, the CIO could have simply reduced its existing positions; instead, starting in mid-January, it embarked on a complex strategy that entailed adding positions that it believed would offset the existing ones. This strategy, however, ended up creating a portfolio that was larger and ultimately resulted in even more complex and hard-to-manage risks.

  • The new Basel capital requirements will require a bank like JPM to calculate capital for the correlation trading portfolio using a new type of internal model, a CRM or comprehensive risk model.
  • CRM models operate on a portfolio basis, and will recognise partial risk hedging. Therefore if you have a position and want to reduce capital somewhat but keep some of the risk, you can do that by ‘adding positions that [you believe] would offset the existing ones’.
  • CRM models do not include investment positions, so if the broad theory – that JPM was long deposits, invested them in corporate credit risk, and used synthetic credit positions to protect the crash risk of the bonds – is correct, the CRM model would only have included the last of these positions. Thus JPM would have had both an accounting and a capital mismatch: depos and bonds accrual accounted and capitalized in the banking book; protection fair valued and CRM-modelled in the trading book.
  • It seems a reasonable theory then that JPM was trying to address this mismatch by modifying its positions to reduce future regulatory capital (and accounting volatility) while still keeping their essential nature as crash hedges. The modifications introduced extra risk which caused the $2B hole.

That’s my current best guess; I await Jamie’s congressional testimony with interest.

Floating carcass ahoy May 11, 2012 at 9:20 am

When Magellan emerged from the strait that bears his name into the Pacific ocean, he thought that he was only a few days’ sailing from Portugal and home. Good try, but no cigar. A similar navigational issue seems to be plaguing folks over last night’s $2B JPMorgan loss. Here are some things we can, and cannot, conclude from this ‘egregious’ loss.

Update. FT alphaville makes a similar point about the difficulty of identifying a ‘good’ hedge here.

Interest rate risk in the banking book – a little bit hidden? April 24, 2012 at 11:01 am

A typically histrionic post on Naked Capitalism about interest rate risk in the banking book gave me pause for thought. (Don’t you wish there was a browser plugin that could turn down a website automatically? It would substitute ‘unexpected inconvenience’ for ‘hidden time bomb’, for instance… Also, NC, for reference, the duration of a bond does not increase ‘exponentially’ as the coupon rate falls.)
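Since that duration claim is cheap to check, here is Macaulay duration for a generic annual-pay bond as the coupon falls (the bond and the yield are my assumptions):

```python
import numpy as np

def macaulay_duration(coupon: float, y: float, maturity: int) -> float:
    """Macaulay duration of an annual-pay bullet bond, face 100."""
    t = np.arange(1, maturity + 1)
    cfs = np.full(maturity, coupon * 100.0)
    cfs[-1] += 100.0                     # principal repaid at maturity
    pv = cfs / (1 + y) ** t
    return float((t * pv).sum() / pv.sum())

# Duration rises as the coupon falls, but only towards the zero-coupon
# limit (duration = maturity). Nothing exponential about it.
for c in (0.10, 0.05, 0.02, 0.00):
    print(f"coupon {c:4.0%}: duration {macaulay_duration(c, 0.04, 10):5.2f}y")
```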

Their point is that at some point rates in the US will likely rise, and that this will have an impact on bank earnings. That’s true. The direct impact, however, is both minor and well-disclosed. Here, for instance, is the relevant table from Citi’s 2011 annual report, showing the impact of a 100bp move in rates.

Citi IR in the BB

These disclosures are mandated in Basel 2, and clearly the numbers are not material for Citi.

Note too that this disclosure should include behavioural effects on both mortgages and their hedges, so that you will see some convexity for large rate shocks in the mortgage book (which a bank might well hedge with caps or swaptions, for instance). These behavioural effects include prepayment and the fixing of mortgages with front-end floating periods that flip (or can be converted) to fixed. To get a feel for the convexity, I would also like to see Citi disclose the impact of a 200bp move.

What is also not there, and what may be an issue, is the impact of rates on the credit risk in mortgages. Rising rates cause extension in mortgages, and that in turn means that the borrower is on risk for longer. In a poor credit environment with foreclosures still going at a good rate, being on risk for longer is a bad thing. The estimated increase in provisions under the different rate scenarios would therefore be another useful additional disclosure. Still, ‘rate risk = hidden time bomb’ feels seriously overblown to me.

Group think: FSA edition July 18, 2011 at 9:49 am

I am sitting in a deeply disappointing FSA seminar on risk management. It’s disappointing because of the failure to take responsibility. I’ve heard the usual analysis of losses in the crisis. They said, as usual (because it is true), that mortgages played a key role. But rather than asking the obvious question – what were the capital rules for mortgages? – the room has been subjected to an injunction to do risk management better. How about we do regulation better instead? Why not try fixing the massive Basel 2 incentive to take mortgage risk in the banking book? Why not fix the capital charges for supersenior ABS, which are still, three years after the crisis, very generous? Why not strip away the ridiculous complexity and double counts of the trading book rules and produce a simple, risk-sensitive framework that capitalizes the risk that matters – that of a systemic crisis – properly, while not hitting risks that don’t matter far too hard? Oh, wait, but that would involve Basel committee members admitting that they were wrong. OK then, as you were.

Update. The FSA also had the audacity to criticise the progress firms had made in risk management since the crisis. Now if only they had given those same firms a moment’s relief from new regulations; if only 99% of what a risk manager does in a bank these days was not mandated by the regulator: well, then that criticism might be fair. But if you want a man to dance a jig it helps to take your boot off his throat first.

Hong Kong Phooey? July 13, 2011 at 5:04 pm

From the FT:

Hong Kong’s exchange has admitted that risk management and financial resources at its clearing house are not up to international standards.

Ah. Oops.

Nuancing HFT July 6, 2011 at 12:18 am

The Streetwise Professor responded to my post on HFT with a number of good points.

I am skeptical that requiring trading on a central order book (CLOB) will eliminate flash crashes. Note that flash crashes have occurred on futures markets with CLOBs.

Fair enough. But with a CLOB, you can suspend all trading at once, you have some hope of defining what best execution means, and you can impose behavioural restrictions in one place. Granted, you can do all of those things in more diverse market infrastructures, but it is a lot harder.

I’m also skeptical that dark pools have anything to do with flash crashes, and am also not convinced that the dark pool structure–which as DEM notes, has effectively supplanted the block market for facilitating large trades–is detrimental to the interests of those who want to trade in size.

What dark pools have done is dramatically reduce the average trade size, forcing everyone to cut even retail-sized positions into multiple trades, while obscuring what, precisely, the current price is. In the old market maker system, the market maker provided block trading capacity without loss of transparency to the market. My main problem with dark pools is exactly that they are dark.

I suspect I’m not too far from TSP here though, as he goes on to say

Top-of-book protection provides a way for HFTs to make money by exploiting fragmentation of liquidity. Creating a real CLOB entails a whole set of issues … but these can be avoided through the creation of a virtual CLOB that protects the entire books of the various liquidity pools.

And that, frankly, would do provided that you can also suspend trading, protect best execution, and so on (which you can, with effort).

One of the suggestions which I stand by, and which TSP doesn’t like, is a minimum quote time for orders. He says:

… restrictions on quoting–e.g., DEM’s suggestion that quotes be good for one second–could well be counterproductive in that regard. The current evidence suggests that order flow became much more toxic on 6 May 2010, and that’s why HFTs stopped supplying liquidity. Forcing them to keep their quotes good for longer than they would choose to on their own in such circumstances makes it more likely they will not quote at all.

Two things. First, very short-term feedback loops are a key component of flash-crash-like behaviour. If the bots can’t react that fast, they can’t crash that fast. Crashes must happen on human timescales, if at all, so that we have the chance to intervene. The sand-in-the-machine of requiring quotes to be good for half a second is vital to preventing rapid phase changes in market behaviour: emergent behaviour is much less common in discrete systems than in continuous ones.

Second, because you can’t put out an order for a few shares good for a millisecond, you can’t make as much money per second on a given order size. That means that you have to quote in bigger size to make money. Instead of cents per order, you have to make hundreds of dollars; hence, quotes in bigger size, which is good for market liquidity. Now, will anyone volunteer to explain ‘emergent dynamics’ to the SEC?
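A back-of-envelope version of that size argument, with every number assumed (and crudely pretending each refresh trades, which real quoting certainly does not):

```python
# If each quote captures a fixed edge per share when it trades, a longer
# minimum quote life means fewer refreshes per second, so hitting the
# same revenue target forces bigger quoted size.
edge_per_share = 0.01      # $ edge per share, assumed
target_revenue = 5.0       # $ per second the strategy must earn, assumed

for quote_life_ms in (1, 500):
    refreshes_per_second = 1000 / quote_life_ms
    size_needed = target_revenue / (edge_per_share * refreshes_per_second)
    print(f"{quote_life_ms:>4}ms quote life -> quote {size_needed:,.1f} shares")
```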

Slippery when wet: arbitrage channels and market efficiency June 24, 2011 at 6:13 am

How much frictional cost do you want in a market?

Just enough, obviously.

The question and answer were brought to mind by a recent speech by Myron Scholes reported in Risk magazine. He warns:

“If you restrict or require more capital of banks, what will happen is that they have to wait until the deviations [in price] get larger before they intermediate, because they have to make a return on the capital they are employing,” he said. “As intermediary services stop, markets then become more chaotic.”

Scholes is right. The all-in cost of a trade depends on its capital usage. If banks have to hold more capital, fewer trades are profitable after the cost of capital is included, so fewer trades happen. Thus markets are less well arbitraged and hence less efficient (unless other players step in, at least).
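Scholes’ mechanism is two lines of arithmetic. All the inputs below are invented, but the shape is the point:

```python
# The mispricing must at least pay for the capital the trade consumes.
capital_per_notional = 0.04    # capital held against the trade, assumed
hurdle_rate = 0.15             # required return on that capital, assumed
holding_years = 0.25           # time for the arbitrage to converge, assumed

min_mispricing = capital_per_notional * hurdle_rate * holding_years
print(f"deviations below {min_mispricing:.2%} of notional go unarbitraged")
print(f"...below {2 * min_mispricing:.2%} if the capital charge doubles")
```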

If frictional costs are very low then you get a huge amount of trading – as in HFT – and that in itself is destabilizing. But if they are too high, you get inefficiency. I would love to see some academic work on where the sweet spot is.

Nathan Myhrvold on Extreme Events June 21, 2011 at 12:44 pm

From a wise Bloomberg column (HT FT Alphaville):

Complacency is baked into our species. We can’t resist thinking that recent experience defines the future. Give us a run of good luck, and we are apt to turn that into an implicit expectation that our luck will continue — even that we are entitled to it.

This kind of thinking was instrumental in the run-up to the financial crash of 2008. Too many private and public institutions assumed that an extraordinary run in prosperity, particularly in the real estate market, was just normal. It didn’t occur to them that things could go so wrong. Even when token stress testing or risk assessment was done, it largely excluded the possibility of a bad shock or a protracted slump. Risk wasn’t systematically measured; it was ignored…

Mistakes are common in big natural disasters. If such events happened frequently, response teams and the people who direct them would have practice, as trauma teams in hospital emergency rooms have. Unfortunately, responders rarely get the opportunity to rehearse large-scale disasters. When the inevitable finally occurs, and a tsunami hits a nuclear reactor, or an earthquake reduces much of a major city to rubble, the people in charge often are caught napping and react ineptly…

The lesson is simple to state, but hard to follow: Risks with heavy consequences must be taken seriously even if the probability of their happening tomorrow is low. That means incurring small, but real, costs in the here and now to mitigate the damage from some disaster that may lie far in the future — even though “far” means beyond the present budget cycle, after the current chief executive officer has retired, or after the current politicians have left office. Only by thinking and training in anticipation of the inevitable can authorities avoid getting blindsided.

When we get risk wrong March 21, 2011 at 10:47 pm

I have been involved in risk analysis, measurement and management for more than half of my life. That’s a scary thought, but it does perhaps mean that I might have some insight. It helps (and apologies if this is sounding like a boast; bear with me) that I have worked not just in finance, but also in real time safety critical systems. Not nuclear, admittedly, but nonetheless things that you really don’t want to screw up. Based on that experience, there are two types of error that seem to me to be pretty common in risk analysis.

The first is mis-estimating the probability of an occurrence. That is, you have considered that something can happen, but you thought that it was a lot less likely (the error is usually that way around) than it turns out to be. This is often the kind of mistake that we make in financial risk management.

The second is somehow more grievous. This type of error occurs when we fail to consider that something can happen at all. An event – that turns out to be both possible and material when it occurs – is completely ignored in our analysis.

The first error is less worrying simply because it can be hedged, partly at least, by a worst case analysis. Forget the probabilities, just suppose the worst thing that we can think of happens. If that really is the worst thing, and we can live with the outcome, then there is some reason for comfort – unless we have made an error of the second kind. (This, by the way, is why reverse stress tests, where you assume a really bad thing has happened, are so useful.)

I don’t know how you reliably catch errors of the second kind. I could descend into platitudes – hire smart people, avoid group think, look at the problem in as many ways as you can – but they won’t help much. There really are more things in heaven and earth, Horatio, than are dreamt of in anyone’s philosophy, and that makes the problem quite hard.

TED throws up his (carefully manicured?) hands in horror at this point and suggests a large dose of Carpe Diem. Jonathan Hopkin, meanwhile (perhaps reflecting a slightly less good cellar than TED’s), takes this as a cue to suggest a healthy dose of conservatism in risk management:

We need to err on the side of avoiding the unthinkable, rather than risking the unthinkable just to get the desirable.

This is entirely sensible of course. But the fundamental problem of figuring out what the unthinkable is, exactly, remains. This for me is a very good argument against nuclear power, at least in its current form. When you are dealing with something that has a half life of more than 700 million years (U235), you can be certain that you can’t think of everything that might happen. Your risk assessment, in other words, is bound to be wrong due to errors of the second kind. And with something as dangerous as uranium and its byproducts, that is a problem.

Update. FT alphaville has interesting coverage of a presentation about storage of fuel rods at the Fukushima reactor here. They say:

it’s once again not quite clear if anyone is really taking account of the full storage time needed for these fuels.

Off-site storage for Fukushima coming online in 2012 will store 20 years of fuel for 50 years. And then? You can count many fuels’ half-lives in the 10,000s of years.

That, by the way, is very conservative. To be safe you need to go to at least ten half lives. That’s 7 billion years for U235, and 40 billion for U238. Humanity has not built anything that has lasted 10,000 years, let alone a billion. We are, in other words, at least a hundred thousand times less skilled than we need to be to take on nuclear fuel storage safely.

When does macropru trump micropru? December 12, 2010 at 6:06 am

Let me explain. Macroprudential regulation is about the whole system, and ensuring its stability, while microprudential regulation is about protecting individual firms. As this note puts it:

Here is an example of macro-prudential concerns. Selling an asset when it appears to be risky may be considered a prudent response for an individual bank and is supported by much current regulation. But if many banks do this, the asset price will collapse, forcing risk-averse institutions to sell more and leading to general declines in asset prices, higher correlations and volatility across markets, spiraling losses, and collapsing liquidity. Micro-prudential behavior can cause or worsen systemic risks. A macro-prudential approach to an increase in risk is to consider systemic behavior in the management of that risk.

A good example of this is the series of Irish repo haircut increases. Microprudentially, LCH Repoclear is doing the prudent thing: it is protecting itself. But macroprudentially, a systemwide increase in margin on an already distressed asset class can be undesirable. Certainly anything that makes lots of people want to (or be forced to) sell at once is a bad thing. Sometimes, then, macroprudential concerns must trump microprudential ones.

I take a different line to Avinash Persaud here. He says:

Macro-prudential regulation is about encouraging different behaviour than a prudent firm would follow, wherever this prudential behaviour could undermine the financial system if followed by everyone.

For me it is more about making sure that the prudent behaviour is not necessary in the first place. Thus in the repo case, if haircuts had been bigger initially, then it would not have been as necessary to increase them.

Macroprudential regulation, then, does not take asset price bubbles and subsequent busts as a given. It attempts, albeit falteringly, to make them less likely. Constraints on leverage are part of this (and after all, a repo haircut is just that – a constraint on leverage). So too are interventions into market dynamics, for instance by requiring all stock quotes to be good for a minimum of half a second (something that would dramatically lower the risk of HFT with minimal impact on lower frequency trading). A more dramatic example would be banning firms from making sudden large changes in margin or risk levels, thus forcing them to be more prudent from the beginning. It is by no means clear that this is required: we do not know enough about macroprudential regulation yet to say. But it might be and, if it is, such an intervention would mark the coming of age of macropru. It’s only a young prince so far, callow and naive, but it has the potential one day to be the king of regulation. My only fear is that microprudential regulation is playing, if not the role of evil usurper, then at least that of Falstaff.

Safety through accidents December 1, 2010 at 12:45 pm

The Psy-fi blog has a nice post on the desirability of having small errors in your risk systems to keep people alert (HT FT Alphaville). They begin by introducing the Titanic Effect:

If you’re sailing icy seas you’d generally want to keep a watchful eye open for icebergs. Unless, of course, you’re in an allegedly unsinkable ship, in which case you’d probably prefer to opt for a spot of partying and an early snooze on the poop deck instead. The craft’s designers will likely not have bothered with wasteful luxury items like lifebelts, emergency flares or lifeboats either: what would be the point?

Thus belief in safety produces behaviour which, if the belief is incorrect, is highly dangerous. A small accident, not enough to sink the ship, but enough to remind people that ships do sink, would reduce risk taking. In financial terms it means being alert to, and managing, the loss given (bad thing) as well as reducing the probability of the bad thing happening.

The Psy-fi blog suggests that

The real route to safer systems is to make sure that they’re not safe at all without human intervention: which is always true anyway, but oft-times needs positive reinforcement.

One example of a mechanism for keeping people alert that I saw recently involved a procedure whereby banks had to contribute prices on a financial instrument. The common good was best served by good prices, but everyone saw only the average, and so at the margin no one had an incentive to be accurate themselves. The answer was to put the outlier firms into a real trade based on their price: the lowest offers and the highest bids were actually executed, in small size. The resulting losses kept banks on their toes.
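Here is a sketch of that mechanism. The quotes are invented and the real procedure was doubtless more careful, but the incentive is the thing:

```python
# Banks submit (bid, offer); everyone sees only the average mid, so the
# marginal incentive to be accurate is weak -- unless outliers trade.
submissions = {                # hypothetical contributed quotes
    "Bank A": (99.50, 99.70),
    "Bank B": (99.55, 99.75),
    "Bank C": (99.80, 99.95),  # highest bid: it gets sold to, in small size
    "Bank D": (99.20, 99.45),  # lowest offer: it gets bought from, likewise
}

avg_mid = sum((b + o) / 2 for b, o in submissions.values()) / len(submissions)
high_bidder = max(submissions, key=lambda k: submissions[k][0])
low_offerer = min(submissions, key=lambda k: submissions[k][1])

print(f"published average mid: {avg_mid:.3f}")
print(f"deal on {high_bidder}'s bid at {submissions[high_bidder][0]}")
print(f"deal on {low_offerer}'s offer at {submissions[low_offerer][1]}")
# An off-market submission now costs real money, so quotes stay honest.
```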

Well designed stress tests can have the same effect, forcing people to look at the consequences of an event that is not foreseen by the risk system. The problem is to avoid the resulting set of controls itself being seen as safe. In truth, good risk management is often best served by a large dose of paranoia about the performance of any man-made system.