Prof. Jayanth R. Varma's Financial Markets Blog

A Blog on Financial Markets and Their Regulation

© Prof. Jayanth R. Varma


Fri, 28 Oct 2011

Does it make sense to hedge DVA?

This blog post is not about whether CVA/DVA accounting makes sense or not; it is only about whether it makes sense to hedge the DVA. Modern accounting standards require derivatives and many other financial assets and liabilities to be stated at fair value. Fair value must take into account all characteristics of the instrument including the risk of non performance (default). CVA and DVA arise out of this fair value accounting.

CVA or Credit Value Adjustment accounts for the potential loss that the reporting entity would incur to replace the existing derivative contract in the event of the counterparty’s default (less any recovery received from the defaulting counterparty). It obviously depends on the probability of the counterparty defaulting and on the recovery in the event of default. More importantly, it also depends on the expected positive value of the derivative at the point of default – if the entity owes money to the counterparty (instead of the other way around), the counterparty’s default does not cause any loss.

DVA or Debit Value Adjustment is the other side of the same coin. It accounts for the possibility that the reporting entity itself could default. One could think of it as the CVA that the entity’s counterparty would need to make to account for the default of the reporting entity. It accounts for the potential loss that the counterparty would incur to replace the existing derivative contract in the event of a default by the reporting entity (less any recovery received from the reporting entity). It can also be thought of as the notional gain to the reporting entity from not paying off its liability in full. The DVA depends on the probability of the reporting entity defaulting, the recovery in the event of default, and the expected negative value of the derivative at the point of default.

The same logic extends DVA beyond derivatives to any other liabilities of the reporting entity that are accounted for at fair value.
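The mechanics can be illustrated with a stylized discrete-time calculation: each adjustment is a sum over periods of expected exposure times marginal default probability times loss given default. Everything below (the exposure profiles, default probabilities and recovery rates) is hypothetical and purely illustrative, not a description of any actual bank's methodology:

```python
# Stylized CVA/DVA: loss given default x sum over periods of
# (expected exposure x marginal default probability).
# All inputs are hypothetical illustrative numbers, not market data.

def credit_adjustment(exposures, marginal_default_probs, recovery_rate):
    lgd = 1.0 - recovery_rate  # loss given default
    return lgd * sum(e * p for e, p in zip(exposures, marginal_default_probs))

# Expected positive exposure (counterparty owes us) drives the CVA;
# expected negative exposure (we owe the counterparty) drives the DVA.
epe = [10.0, 12.0, 8.0, 5.0]
ene = [4.0, 6.0, 7.0, 3.0]

pd_counterparty = [0.010, 0.012, 0.015, 0.020]
pd_own = [0.008, 0.010, 0.012, 0.015]

cva = credit_adjustment(epe, pd_counterparty, recovery_rate=0.4)
dva = credit_adjustment(ene, pd_own, recovery_rate=0.4)

# Fair value is reduced by the CVA and increased by the DVA.
net_adjustment = dva - cva
```

The point of the sketch is only to show why the DVA grows exactly when the reporting entity's own default probabilities rise, which is what makes it a natural hedge in bad times.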

The application of CVA and DVA in valuing assets and liabilities on the balance sheet is perhaps the only logical way of applying fair value accounting to assets and liabilities in which non performance risk is material. But the accounting standard setters took another fateful and controversial decision when they mandated that changes in CVA and DVA be included in the income statement instead of letting them go straight to the balance sheet as a part of Other Comprehensive Income. As I said at the beginning, this blog post is not about the merits of this accounting treatment; I mention the accounting rules only because these rules create the motivation for the management of financial firms to try and hedge the CVA and DVA.

Hedging the CVA is relatively less problematic as it only increases the resilience of the firm under conditions of systemic financial stress. Counterparty defaults are somewhat less threatening to the solvency of the entity when there are hedges in place even if there could be some doubts about whether the hedges themselves would pay off when the financial world is collapsing. What I find difficult to understand is the hedging of the DVA.

The DVA itself is a form of natural hedge in that it produces profits in bad times. It is when things are going wrong and the world is worried about the solvency of the reporting entity that the DVA changes produce profits. One could argue that the profits are notional, but there is no question that the profits arise at the point in time when they are most useful. Hedging the DVA would imply that during these bad times, the (possibly notional) DVA profits would be offset by real cash losses on the hedges. A position that produces losses in bad times is not a good idea. Such positions have to be tolerated when they are intrinsic to the business model of the entity. What baffles me is why anybody would willingly create such wrong way risks purely to hedge an accounting adjustment.

The Modigliani Miller argument in capital structure theory (home made leverage) can be extended to hedging decisions (home made hedging) to say that hedging is irrelevant except when it solves a capital market imperfection. Bankruptcy costs are a major capital market imperfection that can make it advantageous to undertake hedging activities that reduce the chance of bankruptcy. In this framework, the only hedges that make sense are the ones that hedge large solvency threatening risks. The DVA hedge is the exact opposite. It produces large cash losses precisely at the point of maximum distress. For example, this Wall Street Journal story says that Goldman Sachs implements a DVA hedge by selling credit default swaps on a range of financial firms. The trouble with this is that these hedges will produce large cash losses when many other financial firms are all in trouble, and this is likely to coincide with troubles at Goldman Sachs itself. Far from mitigating bankruptcy risks, the hedges would exacerbate them.

The only way this makes sense is if investment banks think that losses during systemic crises can be pushed on to the taxpayer. If this assumption is correct, then DVA hedges work wonderfully to socialize losses and privatize gains!

Posted at 21:55 on Fri, 28 Oct 2011

Sun, 16 Oct 2011

St. Petersburg, Menger and slippery infinities

In the twentieth century, St. Petersburg became Petrograd, then Leningrad and finally went back to being St. Petersburg. The St. Petersburg paradox named after this city also seems to have been running around in circles during the last three centuries. The latest round in this long standing paradox has been initiated by a mathematics professor who is coincidentally named Peters. Way back in 1934, Menger proved that a generalized version of the St. Petersburg paradox invalidates all unbounded utility functions. Prominent economists like Arrow and Samuelson have accepted this conclusion. In a paper entitled Menger 1934 revisited, Peters argues that Menger made an error that has remained undiscovered during the last 77 years.

The original version of the St. Petersburg game involved a fair coin being tossed until the first time a head appears. If this happens at the n'th toss of the coin, the payoff of the game is 2^(n-1). The probability of this event is 2^(-n), and therefore this event contributes 2^(n-1) · 2^(-n) = 1/2 to the expected payoff of the game. Summing over all n yields 1/2 + 1/2 + ... and the expected payoff from the game is therefore infinite.

Bernoulli’s 1738 paper from which the paradox obtained its name argued that nobody would pay an infinite price for the privilege of playing this game. He proposed that instead of expected monetary value, one must use expected utility. If utility of wealth is logarithmic in wealth, then the expected utility from playing the game is not only finite, but is also quite small.
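Both claims are easy to check numerically by truncating the infinite sums; a minimal sketch:

```python
import math

# St. Petersburg game: first head on toss n pays 2**(n-1),
# with probability 2**(-n).

def expected_payoff(terms):
    # Every term contributes 2**(n-1) * 2**(-n) = 1/2, so the truncated
    # sum is terms/2 and the full sum diverges.
    return sum(2 ** (n - 1) * 2 ** (-n) for n in range(1, terms + 1))

def expected_log_utility(terms):
    # Bernoulli's fix: ln(2**(n-1)) = (n-1) ln 2, and the probability-
    # weighted sum converges to ln 2 -- a certainty equivalent of just 2.
    return sum(2 ** (-n) * math.log(2 ** (n - 1)) for n in range(1, terms + 1))

print(expected_payoff(100))      # 50.0 -- keeps growing with more terms
print(expected_log_utility(60))  # ~0.6931, i.e. ln 2
```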

Menger’s contribution was to consider a Super St. Petersburg game in which the payoff was not 2^(n-1) but exp(2^(n-1)). Essentially, taking logarithms of this payoff to compute utility yields something similar to the payoff of the original St. Petersburg game, and the offending infinity reappears. Menger’s solution to this generalized paradox was to require that utility functions must be bounded. In this case, there is no monetary payoff that yields very high utilities like 2^(n-1) for sufficiently large n.
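Menger's construction can be checked the same way; since exp(2^(n-1)) overflows floating point almost immediately, one works with the log utility directly:

```python
# Super St. Petersburg: payoff exp(2**(n-1)) on toss n. Log utility of
# the payoff is just 2**(n-1), so each term again contributes
# 2**(-n) * 2**(n-1) = 1/2 and the expected utility diverges.

def super_expected_log_utility(terms):
    return sum(2 ** (-n) * 2 ** (n - 1) for n in range(1, terms + 1))

print(super_expected_log_utility(40))  # 20.0 -- half a unit of utility per term
```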

Peters argues that there is an error in Menger’s argument. The logarithmic function diverges at both ends — for large x, ln(x) goes to infinity, but for small x (approaching zero), ln(x) goes to minus infinity. Suppose a player pays a large price (close to his current wealth) for playing the Super St. Petersburg game. Now if a head comes up quickly, the player’s wealth will be nearly zero and the utility would approach minus infinity. The crux of Peters’ paper is the assertion: “Menger’s game produces a case of competing infinities. ... the diverging expectation value of the utility change resulting from the payout is dominated by the negatively diverging utility change from the purchase of the ticket.” Therefore, the ticket price that a person would pay for being allowed to play this game is finite.

I agree with Peters that even for the Super St. Petersburg game, a person would pay only a finite ticket price if the utility function is logarithmic or is of any other type that has a subsistence threshold below which there is infinite disutility. It appears to me however that a slight reformulation reintroduces the paradox. If we do not ask what ticket price a person would pay, but what sure reward a person would forego in order to play this game, the infinite disutility of the ticket price is kept out of the picture, and the infinite utility of the payoff remains. In other words, the certainty equivalent of the Super St. Petersburg game is infinite. Peters is right that a person with logarithmic utility would not pay a trillion dollars to play the game, but Menger is right that such a person would prefer playing the Super St. Petersburg game to receiving a sure reward of a trillion dollars. Peters’ contribution is to make us recognize that these are two very different questions when there is a “competing infinity” at the other end to contend with. But Menger is right that if you really want to exorcise this paradox, you must rule out the diverging positive infinity by insisting that utility functions should be bounded.

Peters also makes a very different argument by bringing the time dimension into play. He argues that the way to deal with the paradox is to use the Kelly criterion which brings us back to logarithmic functions. Peters relates this to the distinction between time averages and ensemble averages in physics. I think this argument goes nowhere. We can collapse the time dimension completely by changing the probability mechanism from repeated coin tossing to the choice of a single random number between zero and one. The first head in the coin toss can be replaced by the first one in the binary representation of the random number from the unit interval. Choosing one random number is a single event and there is no time to average over. The coin tossing mechanism is a red herring because it is only one way to generate the required sample space.
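The replacement of repeated tosses by a single uniform draw is easy to verify: the first 1 in the binary expansion of u lies at position n exactly when u falls in the interval [2^(-n), 2^(-n+1)), which has probability 2^(-n), just like the first head on toss n. A sketch (the seed and sample size are arbitrary):

```python
import random

def first_one_bit(u):
    """Position of the first 1 in the binary expansion of u in (0, 1)."""
    # u >= 1/2 means the first bit is 1; otherwise double and look again.
    n = 1
    while u < 0.5:
        u *= 2.0
        n += 1
    return n

# A few hand-checkable cases.
assert first_one_bit(0.75) == 1  # 0.11 in binary
assert first_one_bit(0.3) == 2   # 0.01001... in binary
assert first_one_bit(0.1) == 4   # 0.00011... in binary

# The distribution matches the toss index of the first head: P(n) = 2**(-n).
random.seed(42)
draws = [first_one_bit(random.random()) for _ in range(100_000)]
share_n1 = sum(d == 1 for d in draws) / len(draws)  # should be near 1/2
```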

Of course, there are other solutions to the paradox. You can throw utility functions into the trash can and embrace prospect theory. You can correct for counterparty risk (Credit Value Adjustment or CVA in modern Wall Street jargon). You can argue that such games do not and cannot exist in a market, and financial economics need not price non existent instruments.

I am quite confident that three hundred years from today, people will still be debating the St. Petersburg paradox and gaining new insights from this simple game.

Posted at 08:57 on Sun, 16 Oct 2011

Thu, 13 Oct 2011

Is there a two tier inter bank market in India?

Update October 13, 2011:

After I posted this yesterday, the RBI published the results of yesterday’s Reverse Repo auction, which showed that no money was parked with the RBI. Possibly, the top tier banks are also now cash deficit in the aggregate, and they do not have any surplus to deposit with the RBI. Or perhaps, the two tier market is de-tiering. I do not know.

Original post (October 12, 2011):

In a well functioning inter bank market, cash surplus banks lend to cash deficit banks and only the aggregate cash surplus or deficit of the banking system is absorbed by the central bank’s liquidity operations (repo or reverse repo). In a two tier market, there is a top tier of healthy banks that lend to and borrow from each other, but this tier refuses to lend to the second tier of banks whose financial health is suspect. In such a market, if the top tier banks in the aggregate have a cash surplus, they would not lend it to the second tier banks, and would instead park the surplus with the central bank. If the second tier banks have a cash deficit, they would be borrowing from the central bank because they are unable to borrow from anybody else. The central bank would thus be partially supplanting the inter bank market. A two tier market is of course better than a complete seizure of the inter bank market where there is no inter bank market at all and all cash surpluses are parked with the central bank which on-lends it to the deficit banks. After 2008, this progression from a normal inter bank market to a non existent one is well known and understood.

What I am worried about is whether there is a two tier inter bank market in India today. Since the end of last month, we have been seeing the odd situation of some banks parking cash with the RBI at 7.25% while other banks are borrowing from the RBI at 8.25%. If there is no tiering of the banking system, this does not make sense. A surplus bank could lend to a deficit bank at, say, 7.75% and both banks would be better off: the surplus bank would earn ½% more than what the RBI pays, while the deficit bank would reduce its borrowing cost by ½%. That this is not happening suggests that the surplus banks do not have confidence in the solvency of the deficit banks and prefer a safe deposit with the RBI. Put differently, there are some banks that are able to borrow only from the central bank because other banks are unwilling to lend to them.
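The half-percent arithmetic can be spelled out explicitly (the corridor rates are from the post; the 7.75% meeting point is merely an illustrative midpoint):

```python
# RBI corridor rates from the post (percent): reverse repo (deposit) and
# repo (borrowing), with a hypothetical inter-bank trade at the midpoint.
reverse_repo = 7.25
repo = 8.25
interbank = 7.75

surplus_bank_pickup = interbank - reverse_repo  # lend to a bank, not to RBI
deficit_bank_saving = repo - interbank          # borrow from a bank, not RBI
# Both sides gain half a percent, which is why the trade's absence is telling.
```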

When I started observing this phenomenon at the end of September, my first reaction was that it was due to the distortions caused by the half yearly closing on September 30. When it lasted beyond that, I thought that this was just the effect of the holiday season (Durga Puja and Dussehra). But all that is now over and still the phenomenon persists. Are some bankers worried about the solvency of their fellow bankers?

Posted at 10:41 on Thu, 13 Oct 2011

Wed, 05 Oct 2011

Basel III: The German (or rather Sinn) Finish

I have blogged about the Swiss Finish and the British Finish that add (or threaten to add) large layers of capital requirements for banks on top of the Basel III minimum. Now, one of Germany’s most influential economists, Hans-Werner Sinn, has come out with proposals that are equally far reaching. My impression is that the German political establishment has been opposed to higher capital requirements, but this could change if the peripheral sovereign crisis necessitates a large bail out of German banks. So Sinn’s proposals are interesting:

After the Basel III system for bank regulation, a Basel IV system is needed in which the risk weights for sovereign debt are to be raised from zero to the level for mid-sized companies.

Common equity (core capital plus balance-sheet ratio) is to be increased by 50% with respect to Basel III.

Sinn does not elaborate on these points which come at the fag end of a long list of (highly controversial) recommendations on how to rescue the euro. There is therefore some ambiguity about what exactly he means. Basel III demands common equity of 4.5% plus a capital conservation buffer of 2.5% plus an extra capital requirement of up to 2.5% for Global Systemically Important Banks (G-SIBs) plus a counter cyclical buffer of up to 2.5%. This leaves us with a range of 7% to 12%. If we take the mid point of 9.5% (for example, a big G-SIB at a point in the business cycle where the counter cyclical buffer is zero) and apply a 50% increase to this, we end up at 14.25%. Since Basel III also requires non equity capital of 3.5%, the total capital requirement would be 17.75%. This is a little below the Swiss and British finish in the aggregate, but it has more of higher quality (equity) capital.
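The arithmetic in the paragraph above can be laid out step by step (the midpoint, as noted, is just one possible reading of Sinn's "50% increase"):

```python
# Basel III common equity stack, in percent of risk-weighted assets.
minimum_common_equity = 4.5
conservation_buffer = 2.5
gsib_surcharge_max = 2.5      # Global Systemically Important Banks
countercyclical_max = 2.5

low = minimum_common_equity + conservation_buffer        # 7.0
high = low + gsib_surcharge_max + countercyclical_max    # 12.0
midpoint = (low + high) / 2                              # 9.5

sinn_common_equity = midpoint * 1.5                      # 50% increase: 14.25
non_equity_capital = 3.5
total_capital = sinn_common_equity + non_equity_capital  # 17.75
```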

Sinn’s proposal for increasing risk weights is also effectively an increase in bank capital requirements. I am quite in agreement with the idea that we should not distinguish between sovereign exposures and corporate exposures when it comes to risk weights. Other classes of assets with low risk weights (for example, exposures to central counterparties) also need to be revisited. Sinn’s proposal attacks the risk weight problem in another way by applying a 50% increase to the balance sheet leverage ratio, which essentially measures the ratio of capital to unweighted assets. Basel III requires a minimum leverage ratio of 3% (assets can be 33 times capital); if this ratio is pushed up to 4.5%, assets will be limited to 22 times capital. For the leverage ratio, Basel III uses tier one capital; it is not clear whether Sinn wants this to be entirely in the form of equity capital.
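The leverage multiples follow from taking reciprocals of the minimum ratios:

```python
# A minimum leverage ratio (capital / unweighted assets) caps the
# assets-to-capital multiple at its reciprocal.
basel_iii_ratio = 0.03   # assets up to 1/0.03, roughly 33 times capital
sinn_ratio = 0.045       # assets up to 1/0.045, roughly 22 times capital

basel_iii_multiple = 1 / basel_iii_ratio
sinn_multiple = 1 / sinn_ratio
```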

Basel III was in some ways a victory for the big global banks (though they are still trying to water it down to whatever extent they can), but it appears to me that the real battle lies beyond Basel III. And perhaps, the banks are gradually losing this battle. So many different groups of people coming at it from different perspectives are ending up with very similar banker-unfriendly numbers on minimum bank capital.

Posted at 14:17 on Wed, 05 Oct 2011