Category / Accounting

Should banks reserve through-the-cycle? January 20, 2014 at 9:56 am

The elephant in the accounts is often loan loss reserves: those oh-so-easy-to-manipulate, oh-so-big (if not actually big-eared) amounts that often drive bank earnings. If you thought derivatives valuation was dodgy, welcome to the loan book. The most recent, if not the most egregious, examples are surveyed in a recent Bloomberg post:

More than 31 percent of JPMorgan’s 2013 earnings, or $5.6 billion, and about 10 percent of Wells Fargo’s, $2.2 billion, weren’t really earned last year. That money came instead from the banks’ so-called loan-loss reserves… [Bank of America] has received the biggest boost from releasing reserves: The move helped it turn $11.8 billion in losses since 2010 into $11.4 billion in profit. Citigroup, which reported $40.4 billion in net income over that time, would have booked about half that amount without the accounting benefit.

This cuts the other way, too. Bank of America would have reported almost $55 billion in profit in 2009 if it weren’t for the $48.6 billion it put back into reserves that year.

This happens, of course, because loan loss reserves are annual estimates, and things change from year to year. There are proposals to move instead to provisions which would reflect losses expected over the life of the loan. This ‘through the cycle’ approach might be more stable, and would certainly result in higher levels of provisions, but it won’t arrive any time soon*. Perhaps supervisors should give up on the accounting standards setters and set robust standards for regulatory through-the-cycle EL provisions?
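To see the mechanics, here is a minimal sketch with made-up numbers (not the banks’ actual figures): reported earnings are pre-provision earnings less the provision expense, so a reserve release flows straight into the bottom line even though nothing new was earned.

```python
def reported_earnings(pre_provision_earnings, provision_expense):
    """Provision expense reduces earnings; a release (negative expense) boosts them."""
    return pre_provision_earnings - provision_expense

# A bank earning 10 underlying in both years:
build_year = reported_earnings(10.0, 6.0)     # builds reserves: reports 4
release_year = reported_earnings(10.0, -3.0)  # releases reserves: reports 13
print(build_year, release_year)  # → 4.0 13.0
```

Same underlying business, wildly different reported numbers — which is exactly the Bloomberg point.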

*The current state of play from FASB is: “The Board discussed the next steps on the credit impairment project and decided to continue to refine the Current Expected Credit Loss (CECL) model in the proposed Accounting Standards Update, Financial Instruments—Credit Losses (Subtopic 825-15)… The Boards will continue redeliberations on the CECL model, considering feedback received through comment letters and outreach activities on Exposure Drafts issued.” Dynamic, huh?

Boring, mostly easy, and oh-so important October 19, 2013 at 11:12 am

Matt Levine, who had fallen off a bit at Bloomberg from his stellar performance at Dealbreaker (something I suspect has more to do with Bloomberg than with him), bounces back with a lovely column on the case of John Aaron Brooks’ mis-marking:

Brooks was mismarking Nymex ethanol futures, which trade on an exchange (the Nymex) and have a reported settlement price every day… The difference [between Brooks’ marks and the exchange’s] starts at 0.89 percent (two cents) and goes up from there, exceeding 10 percent (25 cents or so) every day from March 9 onward and reaching a hilarious peak of 105.16 percent — Brooks’s mark was $4.77, versus a settlement price of $2.325 — on Sept. 30, 2011, a few weeks before Brooks was caught.

As Matt points out, this reflects much worse on his employer than it does on Brooks. The exchange publishes a settlement price every day. It isn’t a secret. All you need to do to check the mark is to read the exchange’s website. Which brings me to the title: checking valuations is often boring, mostly easy to do*, and incredibly important. It is amazing how often banks fail to get even the easy parts right.

*Although it can be very hard.
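The check really is that easy. Here is a hedged sketch of the kind of daily price-verification test a product control team could run; the tolerance and the second day’s marks below are invented (only the 30 September figures come from the column).

```python
def flag_mismarks(marks, settlements, tolerance=0.01):
    """Return dates where |mark - settlement| / settlement exceeds tolerance."""
    flags = []
    for date in marks:
        settle = settlements[date]
        deviation = abs(marks[date] - settle) / settle
        if deviation > tolerance:
            flags.append((date, round(deviation, 4)))
    return flags

marks = {"2011-09-30": 4.77, "2011-09-29": 2.40}
settlements = {"2011-09-30": 2.325, "2011-09-29": 2.39}
print(flag_mismarks(marks, settlements))
# → [('2011-09-30', 1.0516)] — the 105.16% deviation is flagged; 0.4% is not
```

Ten lines against a public data feed; that no one ran something like this for months is the real scandal.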

Unreliable journalism in commenting on derivatives valuation August 15, 2013 at 10:30 am

Peter Eavis has a Dealbook post which, sadly, reiterates many of the common misunderstandings about derivatives valuation. Let’s start from first principles, and try to understand what is really going on here.

Why are we trying to fair value a derivatives book? First, to create a reliable P/L which accurately reflects the value attributable to security holders. We want the right earnings so that current equity holders are properly compensated for the risk they are taking, for instance. Second, earnings volatility is the paradigmatic definition of risk, so we want earnings to accurately reflect the swings in the value of a derivatives portfolio.

The concept of fair value therefore has at its heart the paradigm that valuation should reflect where something should be sold. Now in practice large portfolios are often sold in toto rather than instrument by instrument, so a relevant question for a portfolio which can be sold this way is ‘how would a bidder value it?’ The answer, typically, is that they would take the mid market fair value then apply a spread to the risks (e.g. a vol point or two on each vega bucket), so a reasonable way to establish fair value, often, is to value at mid then take an appropriate bid/offer reserve. (That last part is important – mid alone isn’t where you can get out, so simply valuing at mid is in violation of the accounting standard and if your auditor lets you get away with that, fire them and get one who won’t.)
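As a sketch of the ‘mid less bid/offer reserve’ approach, with invented bucket vegas and an assumed one-vol-point spread:

```python
def fair_value(mid_value, vega_buckets, vol_spread=1.0):
    """Fair value = mid less a bid/offer reserve: a spread of vol_spread
    (in vol points) charged against the absolute vega in each bucket."""
    reserve = sum(abs(vega) * vol_spread for vega in vega_buckets)
    return mid_value - reserve

# vega per bucket in $ per vol point (say 1y, 2y, 5y buckets)
buckets = [120_000, -80_000, 50_000]
print(fair_value(10_000_000, buckets, vol_spread=1.0))  # → 9750000.0
```

Note the reserve is charged on the absolute vega in each bucket: a bidder charges for risk in both directions, not for the net.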

We now have two problems:

  • How do you establish where mid market is?
  • How do you decide if this paradigm works for your portfolio?

Neither of these are particularly difficult questions. Where there is a liquid market of buyers and sellers, then you use market prices. Where there is a liquid market in related instruments, you use those prices to calibrate an interpolator model. Where there isn’t either of those, then perhaps you can use quotes rather than real trade prices. Or if that fails, you make something up. In the latter two cases, though, you will typically need a valuation adjustment to reflect likely uncertainty in your price. Take your best guess, but then take a reserve to reflect how wrong that guess might be*.

The method will fail if your portfolio is a large part of the market, or would take a long time to liquidate. In this case the principle of valuing where you could close out the book would suggest taking an extra reserve† to reflect the price change you would cause if you tried to sell the whole portfolio. Just because you are buying and selling 1% of the position each day does not mean that the prices those trades happen at are reflective of where you could get out of the entire position.
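A minimal sketch of such a size-based reserve, assuming (purely for illustration) a linear price-impact model — the parameters are invented, not a calibrated impact estimate:

```python
def liquidity_reserve(position, adv, price, impact_per_adv=0.002):
    """Reserve for a concentrated position: assume each multiple of average
    daily volume (adv) liquidated moves the price by impact_per_adv
    (as a fraction of price), and charge that impact on the whole position."""
    days_to_liquidate = position / adv
    impact_per_unit = impact_per_adv * days_to_liquidate * price
    return impact_per_unit * position

# Holding 50x the average daily volume: a material haircut to the screen price.
print(round(liquidity_reserve(position=5_000_000, adv=100_000, price=20.0)))  # → 10000000
```

The point is not the particular impact model — real ones are messier — but that a position which takes fifty days to exit cannot honestly be marked at the last 1% clip’s price.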

In the Whale farrago, JP (it seems) neither took a prudent bid/offer valuation adjustment nor took an adjustment to reflect the size of their position. This has nothing to do with derivatives being murky and everything to do with not complying with the basic idea of marking to where you can get out of the portfolio.

The usual line peddled at this point is that none of this would be possible if all derivatives were traded on exchange. That’s false. Many exchanges are replete with illiquid contracts where the last published trade price is not reflective of where the current market would be were anyone to try to trade. (Just try looking at pretty much any far-from-the-money single-stock listed equity option, or any commodity/energy contract away from a few benchmarks.) Exchanges are not a replacement for good product control teams trying, daily, to test prices: indeed, if their prices are used without thought, they can be far more dangerous than letting the traders tell you where the market is, then diligently checking.

Financial instrument valuation involves a lot of grunt work. Multiple data sources, experienced individuals, prudent reserves/valuation adjustments and skepticism are all required to do a good job. That’s true of exchange-traded instruments and OTC ones. The estimation of fair value is an important discipline, but it is vital not to lose sight of the fact that it is, despite all this work, an informed guess. There is no platonic ideal of the right price out there waiting to be discovered, especially not for any really big position, whether in securities or derivatives. We can rightly blame JP for not doing a good job in forming its estimate, but we should also understand that perfection is unattainable. If you really want to know where you could sell a position in any financial instrument, the only way to find out is to sell it.

*You do it that way rather than using a ‘prudent’ (i.e. wrong) mark first because you want the price and its volatility to be the best guess (especially if you are hedging), and second because you want to flag to management and owners the uncertainty in that price.

†One of the many changes in accounting standards that have made things worse in recent years is that these size-based price adjustments are often disallowed in US GAAP. What were the FASB thinking of?

Update. Wot he said, too. Especially the bit about loans. Indeed, this qualifies for quote of the day status:

Compared to, like, banking, JPMorgan’s CIO portfolio was a model of transparent valuation, even with the fraud.

Just because they tell you doesn’t mean it isn’t true July 29, 2013 at 6:02 am

From the FT:

Big US banks are warning that new rules on their funding risk damaging the more-than-$7tn “repo market”, where financial institutions borrow against government bonds, potentially destabilising one of the most important financial markets in the world.

Analysts at JPMorgan Chase estimate that big banks in Europe, Japan and the US would have to hold at least $180bn of additional regulatory capital to cover their borrowings in the repo market under new leverage ratio rules proposed by US and international banking regulators in recent weeks… The ratio proposed by the Basel Committee does not allow banks to “net”, or offset, their various repo trades against one another.
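The netting point can be made with a toy example — invented positions and a simplified 3% leverage ratio, not the actual Basel calculation:

```python
def leverage_capital(exposures, leverage_ratio=0.03, allow_netting=False):
    """Capital = leverage ratio x exposure measure. With netting, offsetting
    repo/reverse-repo exposures to the same counterparty collapse to their
    net; without it, the gross amounts add up."""
    if allow_netting:
        total = max(sum(exposures), 0.0)
    else:
        total = sum(abs(e) for e in exposures)
    return leverage_ratio * total

# +100 repo and -95 reverse repo with the same counterparty (in $bn)
trades = [100, -95]
print(round(leverage_capital(trades, allow_netting=True), 2))   # → 0.15 (on net 5)
print(round(leverage_capital(trades, allow_netting=False), 2))  # → 5.85 (on gross 195)
```

A nearly matched book that is tiny on a net basis becomes enormous — and capital-hungry — on a gross basis, which is the banks’ complaint in a nutshell.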

European banks would be less scary if they were in the US February 22, 2013 at 6:36 am

Matt Levine is wrong. He has a post U.S. Banks Would Look Scarier If They Were European Banks which is entirely the wrong way around. Yes, if US banks reported under IAS their balance sheets would be bigger, but that’s because IAS is wrong and US GAAP right. The real problem is HSBC, Deutsche and Co.’s balance sheets are swollen by the utterly idiotic accounting for derivatives recently imposed by the IASB. Fortunately ways around that are being developed, as we will hopefully see shortly.

Financial statements and the death of Osama January 31, 2013 at 5:06 am

Or, why Baudrillard can teach us more about accounting than Descartes

The Epicurean Dealmaker confuses us thus:

accounting is … an epiphenomenon to the actual day-to-day activities which any business conducts. It is a way to keep track of the financial outcomes of a firm’s true activity, which is conducting business.

While this isn’t false, per se, it is deeply unhelpful. That is because it suggests that there is an underlying `real’ business which `real’ managers, the kind who wouldn’t know a statement of cash flows if they were threatened by one in Central Park, somehow live and breathe.

Things are rather different. First, as Baudrillard taught us, the way to think of meaning is not as imbuing some ineffable connection to the real, but rather as a web of reference. In other words, there’s no real business, there is just a collection of actual and conceivable management reports. Management can’t manage `the real business’ because there is no such thing. They can and do manage (and become seduced by*) signs of business. That’s all.

In this light, the current criticism of bank accounting (see, for instance, Partnoy and Eisinger) can be read as an objection to its relationship to other accounts. DVA in the P/L means that US GAAP is substantially different from management accounting, for instance. What the readers of financial statements want to see is something closer not to the truth but to the narrative that managers have constructed to give themselves comfort. They want an albeit redacted and dramatised precis of what really happened in the situation room, not Brave Zero Thirty.

*Doesn’t this account of seduction remind you of the games management play?

Baudrillard’s concept of seduction is idiosyncratic and involves games with signs which set up seduction as an aristocratic “order of sign and ritual” in contrast to the bourgeois ideal of production, while advocating artifice, appearance, play, and challenge against the deadly serious labor of production. Baudrillard interprets seduction primarily as a ritual and game with its own rules, charms, snares, and lures.

Too much to say today January 3, 2013 at 2:56 pm

Like the pent-up demand released in yesterday’s equity rally, there has clearly been a lot of good writing going on over the holidays that is coming out. There is too much to comment on in detail today, so here are some highlights:

  • Lisa Pollack in FT Alphaville on reconciling between US GAAP and IFRS presentations of derivatives. The bottom line is that accounting standards are a mess, but at least footnotes will now provide some reconciliation between the differing approaches. If you level the differences, the top of the list of banks by size of derivatives balance sheet is JPM, BAC, BNP, Barclays, DB, RBS, SG, UBS, CS, GS, MS. Is anyone else as surprised by how high the French are in this list as I am?
  • Lisa again on the slightly troubling lack of clarity over to whom collateral posted by ERISA pension funds at FCMs should be returned in the event of bankruptcy. Ooops.
  • A good, long read from Frank Partnoy and Jesse Eisinger in Atlantic magazine on bank disclosures. Favourite quote: “There is no major financial institution today whose financial statements provide a meaningful clue about its risks”.
  • Coppola comment on a BIS paper on safe assets (which I have yet to read, but will get to). I agree with (and have been pushing) the idea that we consider the perspective whereby “the purpose of government debt is not to fund government spending. It is to provide safe assets.” (HT to Izzy for picking this one up.)
  • Ross Gittins in the Sydney Morning Herald on the gangs which run America.

Valuing non-traded derivatives January 2, 2013 at 2:55 pm

There has been further kerfuffle over Deutsche’s handling of gap options in leveraged supersenior trades. For instance, the FT reports the remarks of a couple of accounting professors. Charles Mulford says

“I believe that the gap risk should have been adjusted to market value – consistent with the views of the former employees,” adding: “One cannot mark-to-market the upside but not the downside.”

While Edward Ketz, according to the FT,

said that in an illiquid market accounting rules still applied and if Deutsche could not determine a market price it should have taken a conservative view and discounted the value of the trade.
“The whole idea of lack of liquidity and lack of knowing what’s out there means the fair value becomes much smaller,” he said.

Leaving aside for a second the fact that few bank CFOs would give a darn what accounting professors think about valuation (for the very reasonable reason that accountants who know their OIS discounting from their DVA are rarer than hen’s teeth), these comments represent a fundamental mis-reading of the valuation process for non-traded instruments.

What is really going on is:

  • Absent a market, you have to value financial instruments using a model.
  • There is almost always a choice of models.
  • The calibration of the model – and indeed its calibratability – matters as much as the mathematics of the model itself.
  • Some models are clearly bad choices when applied to some products as they do not capture essential features of the product.
  • Some calibrations of sensible models are foolish, as they do not reflect where the hedges that will be actually used trade.
  • There is often a wide choice of sensible models with sensible calibrations. There is usually no best choice, and no unambiguous `market value’.
  • Different choices give different valuations.
  • Different quants will have different views on what is `best’. Smart derivatives traders are skeptical of the efficacy of any particular model when applied to non-traded products.
  • You will only know if the model and calibration choice you made was sensible after the product has matured and you have examined whether the hedge strategy you used captured the value that the model said was there.
  • Sometimes it is better not to model a feature using implied parameters if you do not think that it is hedgeable.
  • Taking P/L from this is aggressive, but not something most auditors would have the guts to object to.
  • Deutsche is probably fine, but if you want to know more, you should read Matt Levine.
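To make the ‘different sensible choices give different valuations’ point concrete, here is a sketch valuing a cash-or-nothing digital under Black-Scholes at two defensible vol calibrations, as two desks might plausibly mark it. All parameters are invented for illustration.

```python
from math import erf, exp, log, sqrt

def digital_call(spot, strike, vol, rate, expiry):
    """Black-Scholes price of a digital paying 1 if spot > strike at expiry."""
    d2 = (log(spot / strike) + (rate - 0.5 * vol**2) * expiry) / (vol * sqrt(expiry))
    n_d2 = 0.5 * (1.0 + erf(d2 / sqrt(2.0)))  # standard normal CDF
    return exp(-rate * expiry) * n_d2

# Same instrument, two defensible calibrations: flat 20 vol vs 25 vol
low = digital_call(100, 120, vol=0.20, rate=0.02, expiry=1.0)
high = digital_call(100, 120, vol=0.25, rate=0.02, expiry=1.0)
print(round(low, 3), round(high, 3))  # materially different values
```

Neither mark is ‘wrong’ ex ante; the roughly 20% gap between them is exactly the valuation uncertainty that a prudent reserve should acknowledge.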

Capital doesn’t matter if valuations are wrong November 15, 2012 at 7:08 am

Bloomberg has an interesting story on the Danish FSA’s pursuit of mis-stated financials at banks:

Denmark’s financial regulator is warning the country’s banks that an understatement of lending risks won’t be tolerated as it embarks on a hunt to catch what it’s dubbed “backdoor” capital dilution.

The Financial Supervisory Authority will review internal rating models that determine how much capital a lender sets aside to ensure banks don’t find a way around stricter standards. While banks may fulfill capital requirements on paper, recent failures suggest risk weights don’t always reflect reality, leaving buffers too small to absorb losses… When Denmark’s housing bubble burst more than four years ago, it revealed widespread capital shortfalls that have since led to the demise of more than a dozen regional lenders. Toender Bank A/S, the most recent insolvency, followed a reported three-fold increase in profit in the first half and a solvency ratio — a measure of financial strength — of 17.3 percent at the end of June. Yet an inspection last month by the FSA revealed bad loans almost 10 times as big as those reported by the bank, wiping out its equity.

Bloomberg loses a few marks here for not being precise: the capital requirements are fine, but the capital available to meet them is bigger than it should be because the bank has not taken enough provisions and thus has over-stated its earnings. What is interesting is that when the Danish FSA tightened their standards on impairments, impairment charges doubled at one large (and well-regarded) bank, Nordea. Presumably the impact elsewhere was similar. This kind of solid, boring policing of lending is very important, so kudos to the Danish FSA for doing it. One does wonder what Toender’s auditors were doing though…

Enhancing the Risk Disclosures of Banks November 13, 2012 at 5:25 pm

I have been reading a useful and timely report on enhancing bank risk disclosures. Its objectives are sensible, and seven fundamental principles are suggested:

  1. Disclosures should be clear, balanced and understandable.
  2. Disclosures should be comprehensive and include all of the bank’s key activities and risks.
  3. Disclosures should present relevant information.
  4. Disclosures should reflect how the bank manages its risks.
  5. Disclosures should be consistent over time.
  6. Disclosures should be comparable among banks.
  7. Disclosures should be provided on a timely basis.

As many commentators (notably Bloomberg’s Jonathan Weil) have pointed out, we are far from this world right now.

The report goes on to give a lot of reasonable detailed recommendations. This is what it has to say on overall capital requirements disclosures, for instance:

[Banks should] Present a table showing the capital requirements for each method used for calculating RWAs for credit risk, including counterparty credit risk, for each Basel asset class as well as for major portfolios within those classes. For market risk and operational risk, present a table showing the capital requirements for each method used for calculating them. Disclosures should be accompanied by additional information about significant models used, e.g. data periods, downturn parameter thresholds and methodology for calculating loss given default (LGD).

And here is the first of four paragraphs on market risk:

Provide information that facilitates users’ understanding of the linkages between line items in the balance sheet and the income statement with positions included in the traded market risk disclosures (using the bank’s primary risk management measures such as Value at Risk (VaR)) and non-traded market risk disclosures such as risk factor sensitivities, economic value and earnings scenarios and/or sensitivities.

It sounds elementary – of course you would want that – but it is a measure of how far banks’ disclosures fail to meet the standard of `what a reasonable person trying to understand the firm would ask’ that I cannot think of a single large bank today that meets that requirement. There is a lot of information in annual reports and Basel pillar 3 documents, but there is a lot that is missing too. These recommendations are a very good step towards filling in the gaps.

Banks will of course push back on this. The last thing that most of them want is (in the words of paragraph 26) to `provide information that facilitates users’ understanding of the bank’s credit risk profile, including any significant credit risk concentrations’. That is short sighted: investors would trust banks more if they could understand them. The reason that many trade below book is their opacity, and enhanced disclosure is the only solution to that.

Dexia – another awkward result November 8, 2012 at 11:05 am

In July 2011:

Dexia was subject to the 2011 EU-wide stress test conducted by the European Banking Authority… As a result of the assumed shock, the estimated consolidated Core Tier 1 capital ratio of Dexia would change to 10.4% under the adverse scenario in 2012… the results determine that Dexia meets the capital benchmark set out for the purpose of the stress test.

And today, according to Bloomberg:

Belgium and France, wrestling for more than a year over the second rescue of Dexia, agreed on a 5.5 billion-euro ($7 billion) recapitalization of the bank.

As Jonathan Weil said last year, you can’t believe anything about regulatory capital benchmarks, in Europe or elsewhere, stressed or not. That’s because the capital ratio only makes sense if you believe the banks’ valuations and loan loss provisions are correct – and Dexia demonstrates vividly (as Wachovia and Lehman and Wamu and so many others did during the crisis) that they can be materially wrong for years.

Show no mercy, Lord Vader October 20, 2012 at 4:54 pm

Star Wars Episode 3 is being repeated and, as usual, I have some sympathy with Vader the falling hero. As I hinted yesterday, I have rather less for Manmohan Singh’s recent article in the FT*. Singh is smart and he has produced some insightful work on OTC derivatives, but his most recent suggestion definitely comes from the dark side. His first error is to assume that financial statements tell him much about derivatives:

each of the large banks active in the OTC derivatives market in recent years carries an average of $100bn of derivative-related tail risk; that is, the potential cost to the financial system from its collapse after all possible allowable “netting” has been done within the bank’s derivatives book and after subtracting any collateral posted on the contracts. Past research finds that the 10-15 largest players in the OTC derivatives market may have about $1.5tn in under-collateralised derivatives liabilities, a cost taxpayers may have to bear unless some solution to the “too-big-to-fail” question can be proffered

Sadly we know nothing of the kind, because derivatives assets and liabilities on IFRS financial statements tell us nothing about net risk. IFRS grosses up: FAS doesn’t. While it is suggested that footnotes reconcile the differences, the fact remains that Singh’s statement is simply wrong about the exposures.
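A toy illustration of the gross-up, with invented trades: the same portfolio shows very different derivative ‘assets’ under IFRS-style gross presentation and US-GAAP-style netting per counterparty under a master netting agreement.

```python
def balance_sheet_assets(positions_by_counterparty, net=False):
    """Derivative assets from signed mark-to-market values per counterparty.
    Gross: sum the positive MTMs. Net: sum each counterparty's net, floored
    at zero (a net negative is a liability, not an asset)."""
    total = 0.0
    for mtms in positions_by_counterparty.values():
        if net:
            total += max(sum(mtms), 0.0)
        else:
            total += sum(m for m in mtms if m > 0)
    return total

book = {"dealer_a": [50.0, -45.0], "dealer_b": [30.0, -40.0]}
print(balance_sheet_assets(book, net=False))  # → 80.0 gross (IFRS-style)
print(balance_sheet_assets(book, net=True))   # → 5.0 netted (US GAAP-style)
```

Neither number is a measure of risk — which is precisely why reading tail risk off either balance sheet is a mistake.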

Next, Singh’s `$1.5tn in under-collateralised derivatives liabilities’. Interdealer exposure is typically fully collateralised; it is corporate and sovereign exposure that isn’t, and this is usually an asset not a liability because the dealer usually lends the corporate money. Does Singh know the difference?

Limits or taxes on non-deposit liabilities are interesting, although they suffer from the problem that if successful they would cause a dramatic credit contraction and if unsuccessful they would transfer large amounts of counterparty credit risk to the shadow banking system. (This is not an either/or choice – you can have the worst of both worlds.) But Singh’s suggestion of a tax on `derivatives liabilities’ is ignorant of the (inane) accounting, macroeconomically unhelpful, and prejudicial to a class of transactions that deserves better. Derivatives aren’t transactions whose value depends on other financial variables any more: they are the things that take the blame when better defended transactions (such as mortgages) cause a crisis. As Darth Sidious well knows, having a plausible fall guy is important whatever the facts of the case.

*Yes, I know this is behind a paywall. Blame the FT, not me.

Accounting fail of the week September 17, 2012 at 9:43 am

I love demonstrations of how far accounting diverges from reality, like this from Jonathan Weil at Bloomberg:

At Hudson City Bancorp Inc. (HCBK), one of New Jersey’s largest lenders, the balance sheet was so detached from reality that the company agreed last month to sell itself to M&T Bank Corp. (MTB) at a 20 percent discount to book value, or assets minus liabilities. Even so, the $3.7 billion sale price was 12 percent more than Hudson City’s stock-market value at the time.

The market thought the bank was worth $3.3B (but didn’t know); the real price, it turns out, was $3.7B; yet the accountants were perfectly happy with a book value of $4.6B. KPMG are their auditors, not that it matters, as any of the big firms would have let this happen.
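For clarity, the arithmetic behind those three numbers (rounding as the article does):

```python
sale_price = 3.7                 # $bn, M&T's offer
book_value = sale_price / 0.80   # the sale was a 20% discount to book
market_cap = sale_price / 1.12   # the sale was 12% above the market value
print(round(book_value, 1), round(market_cap, 1))  # → 4.6 3.3
```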

Auditors, nul point August 22, 2012 at 6:28 am

From Dealbook:

Having completed the first review of brokerage firm audits, the Public Company Accounting Oversight Board said on Monday that it had found deficiencies in every audit its inspectors reviewed.

Go and read the whole article; it is hilarious (unless you have an unquestioning belief in the veracity of audited financial statements).

Accounting and earning smoothing June 22, 2012 at 4:31 pm

Some people try to blog about a range of things, and end up doing most of them badly (yes, Felix Salmon, I am looking at you). Jonathan Weil just writes about accounting, but he does it really well. Today’s Bloomberg column is a classic, in which he points out the absurdity of the accounting rules that allowed JPMorgan to record gains to offset those Whale losses.

The trading loss so far for the second quarter was about $2 billion, before taxes, the company said May 10. However, the [CIO] office had also realized a $1 billion pretax gain from sales of securities this quarter, which JPMorgan said it hadn’t factored into its previously issued earnings forecasts. The obvious impression JPMorgan left was that it had sold securities and booked gains as a way to mitigate the loss.

There’s nothing wrong with that under the accounting rules. The problem is with the rules themselves, which create needless complexity for investors, along with ample opportunities for companies to manipulate the timing and size of their earnings.

I would encourage you to read the rest. It discusses how supervisors were complicit with the accounting standards setters for many years in allowing banks to mask their true earnings volatility, and how things have recently got a little better (although obviously not a lot better, given what has been happening in Spanish banks).

Regulating the whale on alphaville June 15, 2012 at 12:11 pm

A slightly expanded version of this post on the JPMorgan losses, accounting, and CRM models is up on FT alphaville here.

Accounting for credit risk before the crisis – a case of a gateway drug? April 20, 2012 at 8:48 pm

(Crossposted from FT Alphaville.)

“The question is,” said Alice, “whether you can make words mean so many different things.”

In a recent Alphaville post, I made the claim that if the monolines had been required to mark the credit risk that they had taken to market, they would not have played such a prominent role in the financial crisis. Here I want to provide some support for that claim.

There will be several threads to this narrative. We begin with credit spreads.

What’s in a credit spread?

A credit spread is the compensation the taker of credit risk receives for risk. It is well-known that this includes more than just compensation for default risk. Citi research, for instance, produced this illustration recently, showing the default and non-default components of generic BBB credit spreads over time:

Components of the credit spread

They use the term ‘risk premium’ for the non-default component: in reality this component is a mix of compensation for liquidity risk, funding risk, and other factors.

Notice how this non-default component varies over time. What this means is that a holder of credit risk who is marking to market suffers some P&L volatility that is unrelated to default risk (as well as some that is).
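As a back-of-envelope version of the decomposition in the chart (illustrative inputs, not Citi’s): the default-compensation part of a spread is roughly PD × LGD, and the residual is the non-default component.

```python
def spread_components(observed_spread_bps, annual_pd, lgd):
    """Split a credit spread into rough default compensation (PD x LGD,
    in basis points) and the non-default residual ('risk premium')."""
    default_component = annual_pd * lgd * 10_000  # convert fraction to bp
    return default_component, observed_spread_bps - default_component

# A generic BBB name: ~30bp of expected loss inside a 150bp spread.
default_bp, residual_bp = spread_components(observed_spread_bps=150,
                                            annual_pd=0.005, lgd=0.6)
print(round(default_bp), round(residual_bp))  # → 30 120
```

When the 150bp spread widens to 250bp with unchanged default expectations, all 100bp of mark-to-market pain comes from the residual — the volatility discussed next.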

The consequences of marking credit to market

If you have to mark a credit risk position to market, then:

  • You have to fund losses caused by credit spread volatility;
  • You have to support the risk of credit spread volatility with some equity; and
  • The risk of the position includes the risk of movements in the non-default component in the credit spread.

A non-mark-to-market holder of the same risk does not have these issues. Depending on their precise accounting standard they may have earnings volatility resulting from changes in perceptions of default, but they won’t have volatility resulting from non-default factors, and thus they don’t need as much equity to support the same position. The need for less equity means that a non-MTM credit risk taker will require a lower return than one who has to mark-to-market.
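A hedged sketch of the earnings-volatility point: simulate a spread path with no defaults at all. The mark-to-market holder sees P&L swings (and needs equity against them); an accrual-accounted holder of the identical risk reports none. All parameters are invented.

```python
import random

def mtm_pnl_vol(spread_path_bps, dv01_per_bp=10_000):
    """Std deviation of period P&L for a MTM holder: P&L per period is
    -(spread change in bp) x dollar sensitivity per bp."""
    pnls = [-(b - a) * dv01_per_bp
            for a, b in zip(spread_path_bps, spread_path_bps[1:])]
    mean = sum(pnls) / len(pnls)
    return (sum((p - mean) ** 2 for p in pnls) / len(pnls)) ** 0.5

random.seed(7)
path = [150.0]
for _ in range(250):
    path.append(path[-1] + random.gauss(0, 2))  # 2bp daily spread vol, zero defaults
print(round(mtm_pnl_vol(path)))  # material daily P&L vol, from defaults of exactly zero
```

That P&L volatility, and the equity to absorb it, is what the non-MTM holder escapes — hence the lower return they can accept.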

The history of historic cost

Long ago, the existence of multiple ways of accounting for financial instruments made sense. There were liquid (or at least semi-liquid) securities, and these were marked; there were totally illiquid loans, and these were accounted for based on historic cost (with a reserve being taken if the loan was judged to be impaired). Banks had a buy-and-hold strategy in the loan book, so recognising P&L on an historic cost basis made sense, while marking to market was natural for the flow-based trading book. Insurance companies had approaches* that were similar to historic cost: essentially they recognised premiums as they were paid, and reserved for claims that had been incurred but not yet presented.

Over the 1990s, these boundaries became blurred. Credit default swaps and securitisation liquidified banking book credit risk, and some institutions adopted originate-to-distribute strategies, while others were able to take credit risk in unfunded form by writing credit protection.

This meant that an arbitrage became available whereby the same risk could be taken by both mark-to-market players (by buying a trading book security) and non-MTM players (by writing credit protection which did not have to be marked or making a loan).

So how exactly did you take unfunded credit risk without having to mark it?

Several methods were developed to allow insurance companies to take unfunded credit risk without having to mark it to market.

  • In the transformer approach, the insurance company would write a contract of insurance to a SPV which then wrote a CDS. Provided that neither the insurer nor the CDS buyer consolidated the SPV, this provided a compound contract that at one end looked like and was accounted for as insurance, and at the other end looked like and was accounted for as a credit default swap.
  • In the wrap approach, the insurance company provided a financial guarantee contract on a bond. If the guarantor was AAA-rated (which the large monolines were pre-crisis), this essentially split the bond into a funding component provided by the buyer of the wrapped bond and a risk component, provided by the insurer.

Insurance companies took credit risk in other ways, too, of course, including some mark-to-market ones; we will come back to this shortly.

Why was taking credit risk in unfunded non-MTM form attractive to some insurers?

The insurance business model is, roughly: take risk by writing insurance, receive premiums, invest the premiums, and pay claims when presented. It works well when the value of invested premiums is larger than that of the presented claims. Given this model, some insurers found credit risk attractive: due to the non-default components of the credit spread, it seemed as if they could get paid more to take credit risk than defaults would cost them, and the structuring technology described above allowed them to do this without having to worry about intermediate earnings volatility caused by having to mark to market. The only question in this business model was ‘do you expect ultimate default losses to be bigger or smaller than the value of invested premiums?’
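The arithmetic of that question can be sketched with some purely illustrative numbers (the notional, premium, yield, default and recovery figures below are all assumptions, not data from any real trade):

```python
# Hypothetical unfunded credit trade: write protection on $100m notional
# for a 200bp running premium over 5 years, reinvest the premiums at 4%,
# and compare the accumulated premium pot with expected default losses.

notional = 100_000_000
premium_rate = 0.02      # 200bp annual premium (assumed)
invest_rate = 0.04       # reinvestment yield on the float (assumed)
years = 5

# Future value of the premium stream, reinvested annually
pot = 0.0
for _ in range(years):
    pot = pot * (1 + invest_rate) + notional * premium_rate

# Expected default losses: assumed 0.5% annual default rate, 40% recovery
annual_default = 0.005
recovery = 0.40
expected_losses = notional * annual_default * (1 - recovery) * years

print(f"Invested premiums: ${pot:,.0f}")
print(f"Expected losses:   ${expected_losses:,.0f}")
# Under these assumptions the premium pot comfortably exceeds expected
# losses -- the whole model turns on realised defaults staying near the
# expected rate.
```

On these numbers the trade looks like free money; the crisis answered the question of what happens when realised defaults are nothing like the historical expectation.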

Insurance risk models vary significantly from firm to firm, but what they share is a desire to estimate the capital required to support the risk of unexpectedly large claims. In other words, they assumed that the key risk was the risk of bigger-than-expected claims; something that is perfectly reasonable given the insurance accounting model.

Credit risk taking, then, was potentially attractive to insurers for three reasons:

  • It could be made to look like a business model they were familiar with (take premiums, invest them, pay claims);
  • It could be accounted for as insurance; and
  • The capital required to support some forms of credit risk taking, such as writing protection on asset backed securities, was rather small according to their models.

Was insurance accounting a gateway drug?

It is certainly not the case that most credit risk taken by insurers pre-crisis was non-MTM. AIG, for instance, used fair value accounting on most of the contracts it wrote. However, I believe that the availability of the non-MTM model in the early 2000s acted as a kind of ‘gateway drug’, getting some insurers into credit risk taking. Without it, the capital required to support credit risk taking would have been higher, and thus the business would have seemed less attractive. Moreover, the earnings volatility potentially created by having to mark to market** would at least have given pause for thought at a much earlier stage.

To be fair to insurance accounting standards setters, it is hard to see what they could have done differently. Financial guarantee accounting makes some kind of sense where the guarantor is writing a wrap on an entire municipal bond, and there are no reasonable proxies available. The transformer structure, where a transaction is accounted for as insurance at one end and marked to market as CDS at the other, is less defensible. Arguably, though, the transformer SPV is a major part of the issue, and the rules governing when such things are consolidated have been tightened up (as have the details of financial guarantee accounting). One does wonder, though, what the relevant supervisors had in mind given their evident comfort with the types of practice described here.

Conservation of P&L volatility

Many laws of physics say that in any interaction some property, such as momentum or charge, is conserved: the total amount of it in the system remains the same. In a certain sense, moving credit risk from an MTM to a non-MTM player violates conservation of risk: the non-MTM party sees less risk in the deal than an MTM party, as the volatility of the non-default-related component of the spread has disappeared. Early 2000s structures such as the ones we have described facilitated this, and thus allowed the non-default component of the spread to be monetized.
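The asymmetry can be made concrete with a toy simulation. Here the same protection-seller position is booked two ways; the notional, duration and spread volatility are illustrative assumptions:

```python
import random

# Toy comparison (hypothetical parameters): the same 5y protection
# position seen by an MTM holder and a non-MTM holder. The MTM holder's
# P&L moves daily with the credit spread; the accrual holder books only
# premium income until a default actually occurs.
random.seed(42)

notional, duration = 10_000_000, 4.5   # risky duration in years (assumed)

mtm_pnl, accrual_pnl = [], []
for _ in range(250):                   # one year of daily steps
    d_spread = random.gauss(0, 0.0005) # 5bp daily spread vol (assumed)
    # Protection seller loses when spreads widen
    mtm_pnl.append(-duration * notional * d_spread)
    # Historic-cost holder just accrues 200bp premium; no spread sensitivity
    accrual_pnl.append(notional * 0.02 / 250)

def vol(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"MTM daily P&L vol:     {vol(mtm_pnl):,.0f}")
print(f"Accrual daily P&L vol: {vol(accrual_pnl):,.0f}")  # zero by construction
```

Identical risk, wildly different reported volatility: the non-default component of the spread simply vanishes from the accrual holder's books, which is exactly what made the transfer profitable.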

The introduction of CVA made this situation somewhat better. Non-MTM parties do not get the full benefit of their accounting if they have to post collateral based on the mark of the position – at the very least they have to fund the collateral, and that is a drain on their liquidity. So many of the monoline trades were done without collateral agreements. This in turn meant that once CVA charges were imposed, some of the volatility of the non-default component of the credit spread reappeared as CVA volatility. The risk hadn’t disappeared after all. Perhaps that is the real lesson: if your trade seems to make risk disappear, there’s something wrong with it.
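To see why the volatility reappears, consider a minimal unilateral CVA sketch using the standard textbook formula (all figures illustrative; a flat expected exposure and a flat hazard rate implied from the spread via the credit triangle are simplifying assumptions):

```python
import math

# Unilateral CVA sketch: CVA = (1 - R) * sum over time of expected
# exposure times the marginal default probability. Default probabilities
# are implied from the counterparty's credit spread, so when the spread
# moves, CVA -- and hence reported earnings -- move with it.

def cva(spread, recovery=0.4, ee=1_000_000, years=5):
    """CVA for a flat expected exposure profile under a flat hazard rate."""
    h = spread / (1 - recovery)        # credit-triangle hazard rate
    total = 0.0
    for t in range(1, years + 1):
        marginal_pd = math.exp(-h * (t - 1)) - math.exp(-h * t)
        total += (1 - recovery) * ee * marginal_pd
    return total

# An uncollateralised trade: CVA at a 100bp vs a 300bp counterparty spread
print(f"CVA at 100bp: {cva(0.01):,.0f}")
print(f"CVA at 300bp: {cva(0.03):,.0f}")
# The difference between the two is spread-driven P&L volatility,
# reborn as CVA volatility on the dealer's books.
```

The exposure profile and parameters are hypothetical, but the mechanism is the general one: an uncollateralised counterparty's spread feeds straight into the dealer's CVA mark.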

*Obviously a one sentence account glosses over many complexities and jurisdictional differences.

**Of course, being able to use a model to mark means that much of this volatility can be avoided, at least while the asset on which credit protection has been written is not obviously impaired.

Fair value gains as monetary base – even better than the real thing April 3, 2012 at 6:25 pm

An old speech of Paul Tucker’s made me think. (Danger, Will Robinson.)

To begin, two unconnected facts.

First, fair value gains are unlike most (not quite all, but run with me) other accounting gains, in that there isn’t necessarily a matching loss. If I issue 100 shares, and you buy 50 of them at $1 each, then I sell a further ten at $1.50, you can mark your 50 up to $1.50 each without anyone having lost anything. Thus unlike a growth in credit (where there is an obligation to repay, and hence a liability matching the asset), fair value gains are asymmetric.

Second, the monetary base is asymmetric; the central bank can create (or destroy) it out of nothing. When the central bank opens the window to repo in assets, it usually creates new money.

Thus in a certain sense, fair value gains are like the monetary base in that they are money with no matching liability. Both are forms of money that banks can use without worrying about liquidity risk. Indeed, fair value gains are in a sense even better than M0 in that audited FV gains count as retained earnings and hence as part of a bank’s capital. They can thus be used to lever broad money (that is, deposits), something central bank liquidity doesn’t do.

This of course means that asset price growth has inherent leverage: buy a share for $1, mark it at $1.50, take the $0.50 as P&L, count that as capital, and use it plus borrowed money to buy more shares.
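Running that loop for a few rounds shows how fast it compounds. The 50% mark-up per round and the 10x leverage cap are stylised assumptions, chosen only to make the mechanism visible:

```python
# Stylised sketch of the fair-value-gain leverage loop: each round, the
# book is marked up, the gain is booked as capital, and the new capital
# is re-levered into more purchases.

capital = 1.0                    # start with $1 of equity
leverage = 10                    # assumed maximum assets-to-capital ratio
assets = capital * leverage      # buy shares, funding the rest with debt

for rnd in range(1, 4):
    gain = assets * 0.5          # mark the whole book up 50% (assumed)
    capital += gain              # the fair value gain counts as capital...
    assets = capital * leverage  # ...so we can borrow and buy more shares
    print(f"round {rnd}: capital ${capital:,.0f}, assets ${assets:,.0f}")
```

Three rounds of marks turn $1 of equity into $216 of capital supporting $2,160 of assets, with no cash profit realised anywhere: the spiral runs on marks alone.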

This leads me to suspect that you can’t really think about money and the Ms without thinking about capital too. Broad money creation – as we said before here – doesn’t just depend on credit demand; it also requires both funding and capital.

Update. More on the modern view of money multipliers (and that whole Krugman vs. the Minskians thing) can be found here, here and here.

Is creditor discipline of large banks possible? February 20, 2012 at 11:10 am

My main concern about last week’s idea from Interfluidity of government inflation linked savings accounts was the impact it would have on bank funding. Essentially I was worried that fewer deposits at banks meant more repo/CP funding. Now I notice that this is a feature rather than a bug:

Frankly, it’s better if more bank funding were “hot” and regulators were frequently on the hook to choose between support or resolution of banks. It’d require regulators to much more actively involve themselves in the asset-side of bank balance sheets. I think it was Minsky who pointed out that, back in the day, monetary policy consisted primarily of discounting against bank loans as collateral, and that fact meant that central banks had a much richer and deeper understanding of the bank activities than in today’s regime, where direct lending to banks is frowned upon. The current regime lets regulators usually feel pure (since they’re not lending to banks directly), and let banks pretend they are private businesses without direct state support, but good feelings among regulators and bankers don’t necessarily serve the public.

(From a comment to the original post.)

Now this is really interesting. Essentially the debate comes down to the feasibility of anyone – a supervisor, a senior debt buyer, an (uninsured) depositor – having a good enough understanding of a bank that they can exert meaningful discipline. If they can, then they should, and maybe Basel’s famous Pillar 3, market discipline, can do some good. The problem is that I am not sure that this is possible, at least with current disclosures. Could anyone really, given say the FSA’s resources, understand a bank like HSBC or RBS or Barclays well enough to make a meaningful credit decision on them? I rather doubt it, but I would love to be proven wrong.

The anatomy of a solvency/liquidity spiral December 13, 2011 at 3:58 pm

I’m reading the FSA report into the RBS failure (so you don’t have to, and because I griped about it not yet being out last week, so I can’t really ignore it). I’ll post in coming days on various aspects of this long and juicy document, but for now let me concentrate on what I think is clearly the mechanism by which RBS failed: a solvency/liquidity spiral.

First, some quotes from the report.

RBS did not have a solvency problem.

“Many accounts of the events refer to RBS’s record £40.7bn operating loss for the calendar year 2008. But that loss is not in itself an adequate explanation of failure. Most of it indeed had no impact on standard regulatory measures of solvency:

  • Of the £40.7bn loss, £32.6bn was a write-down of intangible assets, with impairment of goodwill contributing £30.1bn. Such a write-down signals to shareholders that past acquisitions will not deliver future anticipated value. But in itself, it had no impact on total or tier 1 capital resources, from which goodwill had already been deducted.
  • In fact ‘only’ £8.1bn of the £40.7bn (pre-tax) operating loss resulted in a reduction in standard regulatory capital measures.

Given that RBS’s stated total regulatory capital resources had been £68bn at end-2007, and that it raised £12bn in new equity capital in June 2008 (when the rights issue announced in April 2008 was completed), an £8bn loss should have been absorbable.” (Page 38)
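The report's decomposition is worth checking arithmetically; the figures below are exactly those in the quoted passage:

```python
# The FSA report's decomposition of RBS's 2008 operating loss (£bn),
# figures as quoted above.
operating_loss = 40.7
intangible_writedown = 32.6     # of which goodwill impairment was 30.1
capital_hit = operating_loss - intangible_writedown  # the part that bit

capital_2007 = 68.0             # stated total regulatory capital, end-2007
rights_issue = 12.0             # new equity raised June 2008

print(f"Loss hitting regulatory capital: £{capital_hit:.1f}bn")
print(f"Capital resources available:     £{capital_2007 + rights_issue:.1f}bn")
# Roughly £8bn of loss against roughly £80bn of capital resources:
# absorbable, as the report says, so the failure mechanism must lie
# elsewhere -- in liquidity, not solvency.
```

The goodwill write-down dominates the headline loss but never touched regulatory capital, since goodwill had already been deducted from it; only the residual £8.1bn did.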

RBS had a liquidity problem…

“The immediate driver of RBS’s failure was … a liquidity run (affecting both RBS and many other banks)… it was the unwillingness of wholesale money market providers (e.g. other banks, other financial institutions and major corporates) to meet RBS’s funding needs, as well as to a lesser extent retail depositors, that left it reliant on Bank of England ELA after 7 October 2008.” (Page 43)

“The vulnerabilities created by RBS’s reliance on short-term wholesale funding and by the system-wide deficiencies were moreover exacerbated by the ABN AMRO acquisition” (Page 46)

… which was driven by concerns about its potential insolvency

“Potential insolvency concerns (relating both to RBS and other banks) drove that run.” (Page 43)

In other words, people were not sure RBS was solvent (even though it was)

“In the febrile conditions of autumn 2008, however, uncertainties about the asset quality of major banks and the potential for future losses played an important role in undermining confidence.” (Page 126)

“The inherent complexity of RBS’s financial reporting from end-2007, following the acquisition of ABN AMRO via a complicated consortium structure, also affected market participants’ view of RBS’s exposures.”

“It is clear that RBS’s involvement in certain asset classes (such as structured credit and commercial real estate) left it vulnerable to a loss of market confidence as concerns about the potential for losses on those assets spread.” (Page 135)

A significant factor in this was that RBS was seen to be too optimistic about what its assets were worth

[Chart: RBS marks vs ABX]

“Deloitte, as RBS’s statutory auditor, included in its Audit Summary report to the Group Audit Committee a range of some £686m to £941m of additional mark-to-market losses that could be required on the CDO positions as at end-2007, depending on the valuation approach adopted… a revision of £188m was made to the valuation of these positions and was treated as a pre-acquisition [i.e. pre-ABN acquisition] adjustment. No other adjustment was made.

Deloitte advised the Group Audit Committee in February 2008 that an additional minimum write-down of £200m was required to bring the valuations of super senior CDOs to within the acceptable range calculated by Deloitte… The Board agreed that additional disclosures should be made in the annual report and accounts, but supported the view of RBS’s management that no adjustment should be made to the valuation.” (Page 150)

“those exposures [i.e. CDO positions] became a focus of concerns by market participants and thus played a significant role in undermining confidence in institutions active in these areas… RBS’s relatively high valuations of super senior CDOs were scrutinised by market comment in early 2008, and there was concern among market participants that further write-downs would be needed, at a time when RBS’s low core capital ratio was already a source of market comment.” (Page 151)

To conclude then

Liquidity risk and opaque/inadequate disclosures, which give rise to concerns about possible insolvency, are enough to doom a bank even if it actually remains solvent.

There will (I know, I know) be more on this tomorrow.