Category / Fair Value

Unreliable journalism in commenting on derivatives valuation August 15, 2013 at 10:30 am

Peter Eavis has a Dealbook post which, sadly, reiterates many of the common misunderstandings about derivatives valuation. Let’s start from first principles, and try to understand what is really going on here.

Why are we trying to fair value a derivatives book? First, to create a reliable P/L which accurately reflects the value attributable to security holders. We want the right earnings so that current equity holders are properly compensated for the risk they are taking, for instance. Second, earnings volatility is the paradigmatic definition of risk, so we want earnings to accurately reflect the swings in the value of a derivatives portfolio.

The concept of fair value therefore has at its heart the paradigm that valuation should reflect where something should be sold. Now in practice large portfolios are often sold in toto rather than instrument by instrument, so a relevant question for a portfolio which can be sold this way is ‘how would a bidder value it?’ The answer, typically, is that they would take the mid market fair value then apply a spread to the risks (e.g. a vol point or two on each vega bucket), so a reasonable way to establish fair value, often, is to value at mid then take an appropriate bid/offer reserve. (That last part is important – mid alone isn’t where you can get out, so simply valuing at mid is in violation of the accounting standard and if your auditor lets you get away with that, fire them and get one who won’t.)

We now have two problems:

  • How do you establish where mid market is?
  • How do you decide if this paradigm works for your portfolio?

Neither of these are particularly difficult questions. Where there is a liquid market of buyers and sellers, then you use market prices. Where there is a liquid market in related instruments, you use those prices to calibrate an interpolator model. Where there isn’t either of those, then perhaps you can use quotes rather than real trade prices. Or if that fails, you make something up. In the latter two cases, though, you will typically need a valuation adjustment to reflect likely uncertainty in your price. Take your best guess, but then take a reserve to reflect how wrong that guess might be*.

The method will fail if your portfolio is a large part of the market, or would take a long time to liquidate. In this case the principle of valuing where you could close out the book would suggest taking an extra reserve† to reflect the price change you would cause if you tried to sell the whole portfolio. Just because you are buying and selling 1% of the position each day does not mean that the prices those trades happen at are reflective of where you could get out of the entire position.
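The reserve logic above can be put in arithmetic form. This is only an illustrative sketch: the numbers, the flat vol-point spread, and the linear market-impact term are assumptions for exposition, not any firm's actual methodology.

```python
# Illustrative sketch: fair value as mid-market value less bid/offer and
# concentration reserves. All numbers are made up for exposition.

def fair_value(mid_value, vega_buckets, bid_offer_vol_points,
               position_fraction_of_market=0.0, impact_per_pct=0.0):
    """Mid-market value less prudent valuation adjustments."""
    # Bid/offer reserve: a vol-point spread applied to each vega bucket,
    # reflecting that mid is not where the risk can actually be closed out.
    bid_offer_reserve = sum(abs(v) * bid_offer_vol_points for v in vega_buckets)
    # Concentration reserve: an extra haircut when the position is a large
    # fraction of the market and liquidating it would move prices.
    concentration_reserve = (position_fraction_of_market * 100) * impact_per_pct
    return mid_value - bid_offer_reserve - concentration_reserve

# A book marked at 1,000 at mid, vega of +50/-30/+20 in three buckets,
# a 2 vol-point spread, holding 10% of the market:
fv = fair_value(1000.0, [50.0, -30.0, 20.0], 2.0,
                position_fraction_of_market=0.10, impact_per_pct=5.0)
# Reserves of 200 (bid/offer) and 50 (size) take fair value to 750.
```

The point of structuring it this way, rather than simply shading the mid, is that the adjustments are explicit and can be challenged line by line by product control and auditors.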

In the Whale farrago, JP (it seems) neither took a prudent bid/offer valuation adjustment nor took an adjustment to reflect the size of their position. This has nothing to do with derivatives being murky and everything to do with not complying with the basic idea of marking to where you can get out of the portfolio.

The usual line peddled at this point is that none of this would be possible if all derivatives were traded on exchange. That’s false. Many exchanges are replete with illiquid contracts where the last published trade price is not reflective of where the current market would be were anyone to try to trade. (Just try looking at pretty much any far-from-the-money single stock listed equity option, or any commodity/energy contract away from a few benchmarks.) Exchanges are not a replacement for good product control teams trying, daily, to test prices: indeed, if their prices are used without thought, they can be far more dangerous than letting the traders tell you where the market is, then diligently checking.

Financial instrument valuation involves a lot of grunt work. Multiple data sources, experienced individuals, prudent reserves/valuation adjustments and skepticism are all required to do a good job. That’s true of exchange-traded instruments and OTC ones. The estimation of fair value is an important discipline, but it is vital not to lose sight of the fact that it is, despite all this work, an informed guess. There is no platonic ideal of the right price out there waiting to be discovered, especially not for any really big position, whether in securities or derivatives. We can rightly blame JP for not doing a good job in forming its estimate, but we should also understand that perfection is unattainable. If you really want to know where you could sell a position in any financial instrument, the only way to find out is to sell it.

*You do it that way rather than using a ‘prudent’ (i.e. wrong) mark first because you want the price and its volatility to be the best guess (especially if you are hedging), and second because you want to flag to management and owners the uncertainty in that price.

†One of the many changes in accounting standards that have made things worse in recent years is that these size-based price adjustments are often disallowed in US GAAP. What were the FASB thinking of?

Update. Wot he said, too. Especially the bit about loans. Indeed, this qualifies for quote of the day status:

Compared to, like, banking, JPMorgan’s CIO portfolio was a model of transparent valuation, even with the fraud.

I believe in netting – mostly December 22, 2011 at 5:45 pm

FT alphaville has a post on derivatives netting, which is mostly reasonable, although it links to a piece by (self-proclaimed?) expert Das, which isn’t.

To begin with, it is important to understand what a properly executed master agreement does. I think of it as glue: it binds up all the contracts between two parties, so that instead of many little contracts, there is one big complicated contract. As a result of this gluing, the parties owe each other whatever the net value of the big contract is. Thus we get two forms of netting: payment netting on everyday cash movements, reducing the number of cashflows between parties; and close out netting, which means that if one of the parties is in contractual default, then only a single net amount is payable.

In jurisdictions where this works (which is most of them – Russia and China being the most prominent examples where it may fail), this means that there is a single claim against the estate of a failed bank (or a single payment to it if the defaulter is in the money on the big contract).
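The effect of close out netting on exposure is easy to show numerically; the trade values below are invented for illustration.

```python
# Sketch of close out netting: many bilateral contracts collapse into a
# single net amount. Positive MTM means the contract is in our favour.

def gross_and_net(mtms):
    """Return (gross claim without netting, single netted amount)."""
    # Without netting, the survivor has claims on the contracts in its
    # favour while still owing in full on those held by the estate.
    gross_claim = sum(m for m in mtms if m > 0)
    net = sum(mtms)
    return gross_claim, net

# Five trades with a defaulted counterparty, in millions:
gross, net = gross_and_net([120.0, -80.0, 45.0, -60.0, 10.0])
# Gross exposure of 175 collapses to a single net claim of 35.
```

This is why netting enforceability matters so much: in a jurisdiction where the glue fails, the estate can cherry-pick, keeping the contracts in its favour while leaving the survivor to claim on the rest.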

Now, the really delicate thing is how this close out amount is determined. Unsurprisingly, a standard methodology is not imposed as part of the standard master agreement, as this agreement has to deal with both bank-to-client relationships, where there are often a small number of derivatives which are easy to value, and bank-to-bank relationships, which may be much more complex. Of course, the vast majority of close-outs are of bank-to-client relationships – and you don’t hear anything about these, because they proceed without disputes (as they do, all the time).

A big bankruptcy like that of Lehman Brothers generates litigation on pretty much everything. The amounts of money at stake are large enough that it is worth suing. So people do, on whatever can reasonably be disputed – and often on things that can’t. Derivatives are part of this, but they are not especially problematic. Indeed, as Kimberly Summe points out, Lehman’s derivatives have received a lot of unnecessary and unwarranted stigma. The unpalatable truth is that it was real estate lending and bonds that broke Lehman, combined with liquidity risk, not swaps.

So far, we have noted that derivatives are not unusual in creating court cases, and that most close outs are simple and effective. But there is an issue that remains: how can it be that reasonable people differ on what the close out amount on a derivatives portfolio is? The answer is that while bankruptcy law usually has a simple idea of what you can claim, determining that amount is not straightforward. Thus for instance in UK law, broadly, if I suffer a loss of £10 because of your bankruptcy, I have a claim of £10 against the estate of the bankrupt. The obvious example is that I have lent the tenner to the bankrupt. But with a derivatives portfolio, what have I lost? Clearly it depends on how much it costs me to close out the risk. I can’t – especially if I want to look good in front of the judge – just use my own valuation: I have to actually go into the market and close out the risk, then add up the cost of doing that. And what I do has to be ‘commercially reasonable’. Thus for instance getting separate bids on the equity, credit and commodity derivatives sub-portfolios might well be commercially reasonable, but doing separate trades on every derivative rather than offering a portfolio of mostly offsetting instruments to the market probably isn’t. (This is a point which Das gets wrong and which lies at the heart of the Nomura vs. Lehman case.)

The problem at the heart of close out, then, is figuring out what value a bank has been deprived of when one of their derivatives counterparties fails. This is often simple, but for a large multi-asset portfolio, it can be both complicated and sufficiently uncertain that it is worth going to court about. The real story isn’t that there is a problem with netting: it is that the valuation of big portfolios of financial instruments is difficult, especially when you have to do it in a crisis.

The one with the CSA in its tail July 14, 2011 at 12:16 pm

FT alphaville asks How much is this plain vanilla derivative in the window?, noting

Banks aren’t marking a [-n uncollateralized] swap to market anymore, but to the model which dictates their internal cost of funds.

Why do we care?

Because it means pricing even the most basic (uncollateralised) swaps is now very complex.

Well, yes and no. A few points:

  • A vanilla derivative is a collateralized one under the standard CSA these days (cash collateral in the same currency, daily MTM, daily margin). Anything else is exotic, because it involves an exotic collateral option.
  • Accounting standards require firms to take account of their own cost of funds in calculating fair value. So using your own cost of funds to discount uncollateralized flows from you to a counterparty is not just standard, it is necessary.
  • It is true that, as the IFR article Alphaville references says, Unsecured trades now present a serious valuation headache. But, um, that’s because they are really hard things to value. ‘Mark to market’ is a chimera here: fair value is the answer, and that is an institution-specific thing because it depends on funding cost.
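Why the value becomes institution-specific is just discounting arithmetic. The flat rates below are assumptions for exposition (real funding curves are term structures), but the mechanism is exactly as shown:

```python
# Sketch: the same expected cashflows discounted at an OIS-style rate
# (collateralised) versus the institution's own funding cost
# (uncollateralised). The 2% and 5% flat rates are illustrative.

def pv(cashflows, rate):
    """Present value of (time_in_years, amount) pairs at a flat rate."""
    return sum(amt / (1 + rate) ** t for t, amt in cashflows)

# A bond-like stream of receivables: two coupons then coupon + principal.
flows = [(1, 50.0), (2, 50.0), (3, 1050.0)]

# Collateralised: discount at the near risk-free collateral rate.
pv_collateralised = pv(flows, 0.02)
# Uncollateralised: discounted at a 5% own funding cost, the same flows
# are worth less -- and the value now depends on who is holding them.
pv_uncollateralised = pv(flows, 0.05)
```

Two banks with different funding costs will therefore, quite correctly, carry the same uncollateralised swap at different fair values: that is the ‘headache’, and it is not a modelling failure.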

Why valuation matters more than capital May 8, 2011 at 9:30 am

An article by Raihan Zamil on vox.eu makes – albeit a little unclearly – a point I have emphasised for a long time: valuation matters more than capital. (See also for instance here and here.)

Why?

Well, let’s take a typical bank. Say it has 100 of assets, supported by 90 of liabilities and 10 of equity.

Adding 2 or 3 to the equity is really controversial: asking for a Basel ratio of 12% is a hard sell. (15% is crazy, by the way. Just saying.) So going from 100/90/10 to 100/88/12 is difficult for supervisors.

But what if the assets aren’t really worth 100? If they are only ‘really’ worth 95, then what we really have is 95/90/5, and the ‘true’ Basel ratio is only about 5.3%. Then increasing the equity by 2 points would still leave the bank a significant distance from being well capitalised.
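The arithmetic is worth spelling out. This sketch treats the assets as their own risk-weighted value for simplicity (real Basel ratios apply risk weights), but the one-for-one hit to equity is the whole point:

```python
# The 100/90/10 bank above, with an assumed 5% asset overvaluation.
# Assets stand in for risk-weighted assets here, for simplicity.

def capital_ratio(assets, liabilities):
    equity = assets - liabilities
    return equity / assets

reported = capital_ratio(100.0, 90.0)        # 10.0% as reported
# If the assets are 'really' worth 95, equity shrinks one-for-one:
true_ratio = capital_ratio(95.0, 90.0)       # about 5.3%
# Adding 2 points of equity barely helps when the marks are wrong:
recapitalised = capital_ratio(95.0, 88.0)    # about 7.4%
```

A 5-point valuation error halves the capital ratio, while a hard-won 2-point recapitalisation recovers only part of it: valuation dominates.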

Moreover, a 5% difference in asset valuation across something as big and complicated as a bank can easily happen. Ensuring that provisions in the banking book and marks in the trading book are accurate is really, really hard (even if you are not trying to pull the wool over the shareholders’ eyes).

What does this mean for bank supervision? It means that before worrying about capital, supervisors have to put a lot – and I mean a lot – of effort into checking valuation methodologies, both in theory and in practice. To be fair this happens to some extent in most jurisdictions already, but given how critical it is, and how hard it is to do correctly*, it would be far better to be over- than under-resourced here.

Now, once you are sure that the valuations are reasonable, you can then look at how leveraged the bank is and how quickly it might lose its capital. But you can’t look at that absent confidence in valuations as you basically know nothing about an institution if you don’t know that its valuations have been diligently determined.


* The above might be seen to imply that there is a ‘correct’ value which can be discovered with sufficient diligence for all assets and liabilities. I don’t believe that is true. In many ways the process – the process of testing valuations and reporting uncertainties, of checking methodologies – is more important than the precise answers.

More on why price does not equal value May 5, 2011 at 7:35 am

Doug, in a truly excellent comment on my previous post, says this:

The reason speculative markets tend to price assets and risks correctly is because over time bad speculators make losing trades and have shrinking capital bases, and good speculators grow their capital base.

With the nonlinear assets (credit protection, vol, skew, correlation, carry trades, many quantitative mean reversion strategies, a lot of the volatile energy products) this mechanism breaks down. At any given time some players who are making losing trades with ex-ante negative expectation will have very large capital bases, because they’ve been consistently collecting the small payout and haven’t hit a large asymmetrical loss.

In other words, if you have only unleveraged longs and shorts in the market, and no derivatives assets, price discovery works. But as soon as you have convexity, so for instance people can write puts and pick up premium, then the mechanism starts to break down. Doug goes on:

This is especially true in a world where managers market to raise capital and people base their investment decisions off of short(ish) term track records. In a world where people only managed their own money and grew or shrunk their capital base from the returns on the initial investment, many of these asymmetrical payoff strategies wouldn’t have time to grow that large before collapsing. However the nature of hedge funds creates a world where capital grows super-linearly with returns (growing from the returns on the initial capital, as well as the influx/outflux of funds that a good/bad track record brings).

Thank you Doug; that is a most insightful comment.
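Doug’s mechanism can be illustrated with a toy simulation. All the numbers below are assumptions chosen for exposition: a strategy that collects a small premium each period but occasionally takes a large asymmetric loss has negative expected value, yet will typically show a long winning streak first.

```python
# Toy put-writing-style strategy: small steady premium, rare large loss.
import random

premium, loss, p_loss = 1.0, 60.0, 0.02
# Expected value per period: 1.0 - 0.02 * 60.0 = -0.2, i.e. negative.

def track_record(periods, seed):
    """Simulate per-period P/L for the premium-collection strategy."""
    rng = random.Random(seed)
    return [-loss if rng.random() < p_loss else premium
            for _ in range(periods)]

pnl = track_record(200, seed=7)
# Most short windows of this track record look steadily profitable, so
# capital flows toward the strategy -- until the asymmetric loss hits.
```

In Doug’s terms: the track record that raises capital is exactly the stretch before the tail event, so the capital-weighting mechanism rewards the strategy at precisely the wrong time.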

Special assets for special people April 19, 2011 at 1:35 pm

FT alphaville has an interesting article on Citi reclassifying $12.7B of assets in the Special Asset Pool from held to maturity to trading. Alphaville quotes the FT proper:

[This] “enables it to take advantage of a recovery in the market for distressed assets and boost capital buffers as Basel III rules are phased in between 2013 and 2019,”… Which is actually a nice way of saying the bank will be able to avoid higher capital charges on the assets.

Well yes. But it is worth knowing why this works.

The key is writedowns. Capital charges were designed for assets that are marked at par. Thus for instance if I have an ABS worth 100, and I have capital of 20%, then I can withstand a price fall to 80 before eroding my capital cushion. But most of the assets in Citi’s pool would not be marked at 100 – more like 40. They are fallen angels. For ABS like this, substantial price increases – 20 points or more – are entirely possible, while price falls are floored at zero. If capital is based on the face value of the asset, then it is penally large for fallen angel assets. (Note that these were transferred into HTM at marks a lot less than par during the crisis and have been held at those marks ever since – that’s what held to maturity means.)

In the trading book however these assets will have a capital requirement that is based on their mark. So yes, by moving them to trading, you take an immediate P/L hit, but you can subsequently benefit from the upside, and you have capital based on their market value.
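The capital arithmetic can be sketched directly; the 20% charge and the mark of 40 are illustrative assumptions carried over from the example above.

```python
# Why a face-value-based charge is penal for fallen angels: the same
# 20% charge applied to face versus to the current mark.

def capital_charge(basis_value, charge_rate=0.20):
    """Capital required against a position, keyed off some basis value."""
    return basis_value * charge_rate

face = 100.0   # original par amount of the ABS
mark = 40.0    # where the fallen angel is actually carried

charge_on_face = capital_charge(face)   # 20: half the position's value
charge_on_mark = capital_charge(mark)   # 8: proportionate to the risk
```

With the charge keyed off face, the bank holds capital equal to 50% of what the position is actually worth, against an asset whose downside is floored at zero; keyed off the mark, the charge scales with the real exposure. Hence the reclassification.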

Accounting for the future March 30, 2011 at 2:28 pm

Deutsche Bank (HT FT Alphaville) have analysed the fraction of bad assets in the Cajas under various definitions of ‘bad’:

Caja NPLs

In the Alphaville article this leads to a discussion of the varying amounts of capital the Cajas might need under different scenarios: €15B, €24B, €30B, even €69B. For me though what is striking is the spread of these numbers. Obviously if you give the Cajas €50B and they turn out only to need €15B, you look like a doofus, while giving them €15B for now and hoping that they don’t need more risks having a rolling banking crisis that plays out over three to five years (see ‘Ireland’).

Note that the use of historic cost accounting means that the Cajas are solvent under all the possible scenarios until the loans are actually written down, and we know from the example of Japan that that process can be delayed for many years. In other words, part of the reason we don’t know how much capital we need is that we don’t have a precise definition of solvency. The accounting model these banks have means that, at the moment at least, there are literally tens of billions of euros of uncertainty about their solvency. So much for the much-vaunted Spanish model.

Update. The Daily Telegraph, commenting on RBS, highlights the same issue:

According to RBS’s latest accounts, which were calculated using IFRS, the bank has tangible shareholder assets of £58bn and core tier one capital of 10.7pc.

Tim Bush … has calculated that under pre-2005 UK GAAP … RBS would have a tangible shareholder assets of £33bn and a core tier one capital of just 6pc.

IFRS … allows banks to disguise the build-up of risks within banks because distressed loans are not reported until they default.

Now I am not sure that this in itself is sufficient reason to move to fair value for the whole of a bank’s balance sheet – that too can be subjective for illiquid assets – but I certainly think that an objective standard with less uncertainty about whether a bank is or is not solvent would be a good idea.

Caja ha ha? February 16, 2011 at 12:08 pm

A reader whose knowledge of Spain and Spanish banking is much greater than mine commented regarding the previous post:

[There is ] a clash of cultures – specifically those of international capital and Spanish regional banking.

The conflict comes into play in the way that equity-for-debt real estate would be dealt with. In the Spanish context of a pre-20th century love of property, the strategy is a no-brainer. The homes they take in lieu of loans have a really low cost of carry (possibly competitive with physical gold) and they are concrete apartments that don’t deteriorate much. And they will sell, eventually. The 30% provision mandated by the BdE should suffice in most cases and would do the trick were Spain still an isolated exotic kind of place.

The capital market people won’t see it that way at all. Aside from the matter of bank accounting standards, they are confusing properties with bad loans. This won’t change.

(I will happily put a link in to your blog, Spanish expert, if you wish.)

I don’t disagree with any of this. The Cajas might represent a perfectly reasonable business model. (I have my doubts, but I know far less about the issue than the writer quoted.) But they do not represent a good business model for a modern European bank. Even without factoring in the low ROE of this business model, you can’t fund it without deposits as anything else has too much liquidity risk. But then deposits vs. mortgages is a strategy that has been prone to boom-and-bust cycles over the years. Equity holders ought to hate this kind of play because the earnings are so volatile even under accrual. Perhaps that’s the equity holders’ problem – they have unrealistic expectations of both bank ROE and bank equity risk – but even if the model has a low risk of ultimate insolvency, I still don’t get the joke.

Dynamically wrong? February 15, 2011 at 10:47 am

The Economist discusses the changing reputation of the Spanish regulatory system. Spain has a system of dynamic provisioning which required their banks to put money aside for expected losses before they started to be incurred. In many ways this is the poster child for new style banking book reserving. But did it serve Spanish banks well? The Economist suggests not:

Spain’s provisioning system may have smoothed the impact of the crisis but did not prevent the system from needing to be recapitalised. Countries that have had “mark-to-market” crises were forced to beef up capital more swiftly, which looks like a good thing now that sovereign-debt worries have people concerned about the potential impact of bank bail-outs on the state.

Of course it is not fair to point the finger at dynamic provisions for this: failure to get the bad news out is a feature of accrual accounting generally, and Spain’s system is better at this than most accrual approaches. But it is fair to point out that a larger proportion of the Spanish banks’ balance sheets are accrual than in many other countries, so in some sense Spain is a test case for modern accrual methods. For me, the issue boils down to confidence. If investors can get comfortable that the provisions are adequate, then confidence is restored, and accrual does not cause systemic risk. But if they can’t, then properly applied fair value may be better. What would the cajas look like on that basis?

Valuation ranges August 26, 2010 at 6:15 am

This blog has consistently emphasised (OK, consistently bored the pants off its readers by emphasising) the importance of valuation ranges for financial instruments. For many such things, the idea of a single correct fair value is a mirage. Instead, it is more helpful to think of a range of values which might be correct.

Steadily both accounting standards and regulation have been coming around to our way of thinking (something for which we certainly claim no credit). There is a particularly clear example in the latest FSA discussion paper:

In April 2008, the Bank of England’s Financial Stability Report analysed the range of values produced by six Large Complex Financial Institutions (LCFIs) at the end of 2007 for super-senior tranches of Collateralised Debt Obligations (CDOs).

These tranches were the most senior slice of CDO structures and would therefore be expected to have a AAA credit rating at inception. The chart below shows the maximum capital requirement for such a position relative to the valuation range. In all cases, the maximum capital requirement is smaller than the variation in valuations (highest valuation minus lowest valuation reported) of the tranches produced across the six firms.

Valuation ranges

Now, it is important to understand what is claimed here. These ranges are not for the same security. One cannot infer that Bank A had a particular mezz supersenior at 90 and Bank B had the same tranche valued at 45. All that one can infer is that Bank A had one mezz supersenior security valued 55 points over Bank B’s value for an entirely different mezz supersenior tranche. Thus much of the range reflects differences in CDO composition. Quality matters in a crisis.

The capital requirement is also misleading in that if the piece is written down from par, that loss reduces capital, so the effective capital taken is the capital requirement plus the writedown. (There is amusingly pious language elsewhere from FSA about trying to ensure that capital requirements are never more than 100% of notional – something they have thus far failed to do in various places in the capital rules.)

FSA legerdemain (which for me weakens their argument considerably) aside, there is a real point here. There are some securities whose fair value cannot always be determined accurately. It is likely that different banks will have different valuations for securities like this. Moreover, in some extreme cases, the range of values can be a significant fraction of the effective capital requirement. That’s fine, but it needs to be understood by readers of financial statements.

Stressed Ben May 6, 2010 at 1:30 pm

From Ben Bernanke’s speech, The Supervisory Capital Assessment Program–One Year Later:

Importantly, the concerns about banking institutions arose not only because market participants expected steep losses on banking assets, but also because the range of uncertainty surrounding estimated loss rates, and thus future earnings, was exceptionally wide. The stress assessment was designed both to ensure that banks would have enough capital in the face of potentially large losses and to reduce the uncertainty about potential losses and earnings prospects.

The premise here is I think entirely accurate: it was not just current losses that were spooking investors during the Crunch, it was uncertainty over how large future losses would turn out to be. I’m not sure the Fed’s stress assessment did that much – the capital and liquidity injections were much more important – but still, the phrasing is interesting. (Remember that the stress tests were not that stressful.)

Later in the speech, Ben makes another interesting point:

Importantly, to conduct effective stress tests, banks need to have systems that can quickly and accurately assess their risks under alternative scenarios. During the SCAP, we found considerable differences last year across firms in their ability to do that. It is essential that every complex firm be able to evaluate its firmwide exposures in a timely way. One of the benefits of the stress testing methodology is that it provides a check on the quality of firms’ information systems.

As I discussed, one reason for the success of the stress tests was the public disclosure of the results. We are evaluating the lessons of the experience for our disclosure policies.

Clearly there is the potential for disclosures here to be really insightful for investors. We have seen how useless VAR disclosures were for predicting losses during the Crunch: perhaps stress test results, especially if standardised across the industry and thus directly comparable, will be more useful. It certainly can’t hurt (well, it can’t hurt unless an actual loss appears in a situation close to one of those tested, and it is much bigger than the test would have indicated). Stress tests are here to stay, and financial institutions will need to get used to them; to resource themselves so that they can run them easily; and to prepare for the consequences of disclosing the results of them.

Valuation uncertainty and leverage April 13, 2010 at 6:06 am

I like Steve Randy Waldman so I don’t want to criticise him too much, but I think he makes an error in the following:

On September 10, 2008, Lehman reported 11% “tier one” capital and very conservative “net leverage“. On September 15, 2008, Lehman declared bankruptcy. Despite reported shareholder’s equity of $28.4B just prior to the bankruptcy, the net worth of the holding company in liquidation is estimated to be anywhere from negative $20B to $130B, implying a swing in value of between $50B and $160B. That is shocking. For an industrial firm, one expects liquidation value to be much less than “going concern” value, because fixed capital intended for a particular production process cannot easily be repurposed and has to be taken apart and sold for scrap. But the assets of a financial holding company are business units and financial positions, which can be sold if they have value. Yes, liquidation hits intangible “franchise” value and reputation, but those assets are mostly excluded from bank balance sheets, and they are certainly excluded from “tier one” capital calculations. The orderly liquidation of a well-capitalized financial holding company ought to yield something close to tangible net worth, which for Lehman would have been about $24B.

What’s wrong? I suspect at least the following:

  • First, the costs of bankruptcy are considerable. The Enron liquidation, for instance, involved fees of more than $600M, and Lehman is a lot more complicated than Enron. Therefore we can chalk up at least a couple of billion to bankruptcy costs, and probably more.
  • In bankruptcy you are a known, forced seller (and terminator of derivatives contracts). The Lehman bankruptcy happened in a crisis – indeed in some ways it caused it. This meant that Lehman’s assets were liquidated under the worst possible conditions. The fact that they were sold for less than their holding value is unsurprising. A 20% discount to sell an illiquid asset in a hurry would not be surprising – and Lehman had at least $300B of illiquid assets. So perhaps $60B here.
  • More to the point, while Lehman sailed fairly close to the wind on its valuations, what it did not do – what few firms do – was be honest about the uncertainty in those valuations. If you read the detail of the valuation section of the Valukas report, you will find that a lot of the time, the correct value of assets is simply impossible to determine. What Lehman did was not perhaps conservative, but it was not illegally aggressive according to Valukas. Given Lehman’s assets, a 5% uncertainty in valuation is not surprising. That’s another $15B.

The real point is leverage. If you have (in round numbers) $30B of capital supporting $600B of assets, then $30B of uncertainty in valuation wipes you out. If you were half as leveraged, you could tolerate twice as much uncertainty. No financial firm will ever be liquidated for anything close to its accounting value, particularly in a crisis. But if firms are less leveraged, then they are more likely to have higher recoveries. Given Lehman’s leverage, going from a going concern value of +$30B to a bankruptcy value of -$50B is not at all surprising.
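The round numbers in that paragraph can be checked directly; this is just the arithmetic of the example, nothing more.

```python
# Leverage versus tolerable valuation uncertainty, using the round
# Lehman-like numbers from the paragraph above.

def wipeout_uncertainty(assets, equity):
    """Valuation uncertainty (as a fraction of assets) that exactly
    exhausts the equity cushion."""
    return equity / assets

lehman_like = wipeout_uncertainty(600.0, 30.0)    # 5% uncertainty kills it
half_levered = wipeout_uncertainty(600.0, 60.0)   # 10% tolerance instead
```

Five percent is, as argued above, well within the plausible valuation uncertainty of a large illiquid portfolio, which is exactly why the leverage, not the netting or the derivatives, is the story.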

The simplest risk management error December 19, 2009 at 6:13 am

Financial risk is the risk of loss. That implies that you know the current value of your portfolio. After all, saying that you might lose $10M from market moves is not that helpful if the portfolio is already worth $20M less than you think it is. Valuation, then, is absolutely fundamental to financial risk management. It is also very difficult to get right: checking the valuation of every instrument in even a moderate sized portfolio is difficult, especially if there are OTC derivatives or illiquid securities in it. So I suppose it is no real surprise that firms continue to get the numbers wrong. But getting the process wrong – failing to have a complete methodology for checking the valuation of the portfolio – that is fairly shocking.

It happens, though. From the Guardian:

The London branch of Toronto-Dominion Bank has been fined £7m by the Financial Services Authority for repeatedly breaching the rules governing the pricing of financial products…

The FSA found that the bank – one of the largest in Canada – had repeatedly failed to follow established procedures in ensuring that a proprietary trader’s books were independently verified, and did not have adequate controls in place that could have detected the pricing issues.

Something for you to do April 6, 2009 at 8:16 am

Willem Buiter, reneging on his earlier negativity on the IASB, quotes from a statement made on April 2, 2009 by the Trustees of the International Accounting Standards Committee Foundation:

Sir David Tweedie, Chairman of the IASB, reported to the Trustees that at their joint meeting last week the IASB and FASB agreed to undertake an accelerated project to replace their existing financial instruments standards (IAS 39 Financial Instruments, in the case of the IASB) with a common standard that would address issues arising from the financial crisis in a comprehensive manner. Though the IASB is consulting on FASB amendments related to impairments and fair value measurement, the Trustees supported the IASB’s desire to prioritise the comprehensive project rather than making further piecemeal adjustments.

This is good. They are not being rushed into anything, and they are not following the FASB in giving in to the banks. However it does make it vital that the IASB gets sufficient informed comment on fair value during its consultative process. I would encourage anyone who cares about these issues to visit the IASB page here, download the consultative document, and comment on it.

Wasteful Timmy March 25, 2009 at 8:41 am

The Geithner plan, understandably, has generated many column inches since it was unveiled on Monday. There is little consensus among the commentariat, but the markets have taken it well. What should we take from Timmy’s last (or at best next to last) stand?

First, it might actually work either by accident – because we are through the worst anyway and it doesn’t hurt – or by design. It is certainly positive in the short term for the shareholders of American banks. And it betokens a reluctance to nationalise which, while negative for the taxpayer, is the kind of thing markets like.

Second, it is clearly an ineffective use of money. The government is providing nearly all the cash. If the same amount had been spent on recapitalising the banks, then there would be more leverage and hence more assets controlled per taxpayer dollar spent. Taxpayers should be outraged by this.
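The leverage argument is simple arithmetic. A sketch, with a purely assumed bank leverage ratio, illustrates why a dollar of equity goes further than a dollar of asset purchases:

```python
# Illustrative arithmetic only: the leverage ratio is an assumption,
# not a figure from the Geithner plan.

leverage = 10.0          # assumed assets-to-equity ratio for a bank
capital_injected = 1.0   # $1 of public money

# A dollar of new equity supports leverage-times as many assets...
assets_via_recap = capital_injected * leverage
# ...while a dollar spent buying assets outright controls one dollar of assets.
assets_via_purchase = capital_injected

print(assets_via_recap / assets_via_purchase)  # -> 10.0
```

At an assumed 10:1 leverage, recapitalisation moves ten times the assets per public dollar that direct purchases do.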

Third, it indicates that Geithner believes that an interestingly modern form of systemic risk is important: the risk that quasi-forced sales by one institution cause losses at others via mark to market. This plan achieves a de facto recapitalisation (albeit wastefully) via the ability of all banks to mark their assets to the purchase price in the plan. This means of course that the plan managers will be strongly encouraged to pay more than the market price for the assets: something they can afford to do given the government subsidy built into the structure.

In summary, then, the plan is far from optimal, but it will probably help a bit. The concern is that it won’t be enough. If that happens, then Timmy will need a new job.

Update. Felix Salmon picks up an interesting quote from Sheila Bair. This makes it clear that the intent of the plan is to crush the non-default component of the credit spread:

They [the prices assets are bought into the plan] will still be, they will be market prices. We’re just trying to tease out the liquidity premium. What’s weighing on market prices right now is that people can’t get financing to buy assets, they can’t get financing to buy assets not many people want to buy, you don’t want to buy. And then you have to hold on to them forever because there’s nobody to sell them to. So, that’s — by providing that liquidity that’s lacking now, we’re hoping to get the prices up to what would really be a true market level.

They are doing this by removing all the risk – funding risk, liquidity risk, and credit spread volatility risk. It’s an awfully expensive way to recapitalise the banks.

80% off February 8, 2009 at 12:18 pm

No, not the closing down sale at one of Britain’s many bankrupt retailers, although it could be. Rather it is the fall in property prices from the peak in one of the exurbs of Fort Myers, Florida. The NYT story is here. But mull on that number for a second. 80%. Then consider putting -0.8 in the HPI vector, and think what that will do for the price of even prime RMBS.
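To see why -0.8 in the HPI vector is so devastating, consider the loss severity on a single defaulted mortgage after an 80% price fall. Every figure below (house price, LTV, sale costs) is an illustrative assumption:

```python
# Rough, illustrative loss-severity arithmetic for one defaulted mortgage
# after an 80% fall in house prices. All figures are assumptions.

original_value = 300_000.0   # assumed house price at origination
ltv = 0.80                   # assumed loan-to-value at origination
hpi = -0.80                  # an 80% fall from the peak

loan = original_value * ltv                    # 240,000 outstanding
recovery = original_value * (1 + hpi) * 0.90   # sale proceeds less ~10% costs
loss = loan - recovery                         # 240,000 - 54,000 = 186,000
severity = loss / loan

print(round(severity, 3))  # -> 0.775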

Something you already knew February 6, 2009 at 7:10 am

This week’s no shit Sherlock award goes to… Treasury Overpaid For TARP Investments. What is more interesting, though, is the utterly spurious precision being treated as fact. Yes, I am sure Treasury overpaid, but saying things like ‘Treasury paid $254 billion for assets worth approximately $176 billion’ implies that with a bit of due diligence, we could find the ‘right’ value. And we really can’t.

Valuation uncertainty February 5, 2009 at 6:28 am

A great data point from S&P via the New York Times:

The wild variations on the value of many bad bank assets can be seen by looking at one mortgage-backed bond recently analyzed by a division of Standard & Poor’s, the credit rating agency.

The financial institution that owns the bond calculates the value at 97 cents on the dollar, or a mere 3 percent loss. But S.& P. estimates it is worth 87 cents, based on the current loan-default rate, and could be worth 53 cents under a bleaker situation that contemplates a doubling of defaults. But even that might be optimistic, because the bond traded recently for just 38 cents on the dollar, reflecting the even gloomier outlook of investors.

Or as a friend of mine put it, ‘if you wanna throw the dart at the board and give me an HPI vector, I can tell you what the bond is worth. But who the hell knows what’s the right HPI?’ Given that future house price inflation cannot be known today, he has a point.

The Bull must die October 27, 2008 at 5:29 pm

From the FED, Information on Principal Accounts of Maiden Lane LLC as at Wednesday, Oct 22, 2008:

Net portfolio holdings of Maiden Lane LLC: $26,802M

Outstanding principal amount of loan extended by the Federal Reserve Bank of New York: $28,820M

So the FED is a couple of billion underwater. On October 16th, the assets were valued at $29,492M.

Swaps spreads and other lunch toppings October 26, 2008 at 10:34 am

Why, sometimes I’ve believed as many as six impossible things before breakfast, said the White Queen to Alice. This quotation came to mind in the discussion of the 30y dollar swap spread in the FT recently:

“Negative swap spreads have been considered by many to be a mathematical impossibility, just like negative probabilities or negative interest rates,” said Fidelio Tata, head of interest rate derivatives strategy at RBS Greenwich Capital Markets.

Oh dear me. A mathematical impossibility is 2 and 2 adding to 5, or the sudden discovery of a third square root of 4. A physical impossibility is something that we think is impossible according to our current understanding of science: accelerating from rest to go faster than the speed of light, say.

Negative swap spreads are neither of those. They simply represent an arbitrage. An arbitrage is when you can make free money without taking risk. Ignoring for a moment the risk de nos jours – counterparty risk – negative swap spreads allow one to lock in a positive P/L if one can fund at Libor flat. Free lunches do not often exist in finance, but they do happen, in particular when there are no arbitrageurs left standing. No arbitrage relies not on the theoretical possibility of a free lunch, but on enough people actually wanting to dine for nothing that prices move to stop the feast. At the moment there is such a shortage of risk capital that one can indeed find free food. So ‘impossible’ things are happening not just before breakfast but all through the day. Bon appetit.
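The carry arithmetic behind the trade is worth spelling out. Buy the Treasury funded at Libor flat, pay fixed on the swap, receive Libor: the floating legs cancel and you keep the (negative) spread as positive carry. The rates below are assumed for illustration, not the actual 2008 levels:

```python
# Illustrative carry on the negative-swap-spread trade. Both rates
# are assumptions (per annum); the -20bp spread is for illustration.

treasury_yield = 0.0430  # assumed 30y Treasury yield
swap_rate = 0.0410       # assumed 30y swap rate, so spread = -20bp

# Fixed legs: receive the bond yield, pay the swap rate.
# Floating legs: receive Libor on the swap, pay Libor on the funding
# (assuming funding at Libor flat) -- these cancel exactly.
carry_bp = (treasury_yield - swap_rate) * 10_000

print(round(carry_bp, 1))  # -> 20.0 bp of locked-in annual carry
```

Under these assumptions the trade earns roughly 20bp a year with no market risk, which is exactly why the spread ‘should’ be impossible when arbitrage capital is plentiful.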