Prof. Jayanth R. Varma's Financial Markets Blog

A blog on financial markets and their regulation. This blog is currently suspended.

© Prof. Jayanth R. Varma


Wed, 18 Dec 2013

Clearing of OTC Derivatives

Dr. David Murphy of the Deus Ex Macchiato blog has published a comprehensive book on the clearing of OTC derivatives (OTC Derivatives, Bilateral Trading and Central Clearing, Palgrave Macmillan, 2013). I was surprised that the author information on the book's cover flap does not mention the blog at all but gives prominence to his having been head of risk at ISDA. Had I found this book in a bookshop, the ISDA connection might have made me less likely to buy it because of the obvious bias that the position entails. I read this book only because of my respect for the blogger. Many publishers have obviously not received the memo on how the internet changes everything.

The book presents a balanced discussion of most issues while of course leaning towards the ISDA view of things. Many of the arguments in the book against the clearing mandate would be familiar to those who read the Streetwise Professor blog. Yet, I found the book quite informative and enjoyable.

In Figure 10.1 (page 261), Murphy summarizes the winners and losers from the clearing reforms. To summarize that highly interesting summary:

Obviously, the clearing mandate has not quite worked out the way its advocates expected. Clearing was originally expected to lead to greater competition and reduce the dominance of the big (G14) dealers. Murphy explains that the big dealers will actually benefit from the mandate as they can more easily cope with the compliance costs.

I am not disturbed to find corporate end users listed as losers. If Too Big to Fail (TBTF) banks were being subsidized by the taxpayer to write complex customized derivatives, these products would clearly have been underpriced and overproduced. When the subsidy is removed, supply will drop and prices will rise. This is a feature and not a bug.

If the price rises sufficiently, end users may shift to more standardized and simpler products. Of course, this will imply basis risks because the hedge no longer matches the exposure exactly. This matters less than one might think. The Modigliani-Miller (MM) argument applied to hedging (which is actually very similar to a capital structure decision) implies that most hedging decisions are irrelevant. The only relevant hedging decisions are the ones that involve risks large enough to threaten bankruptcy or financial distress and therefore invalidate the MM assumptions. Basis risks are small enough for the MM arguments to apply: the inability to hedge them has zero real cost for the corporate end user and for society as a whole.

One could visualize many ways in which the market may evolve:

  1. The reforms could lead to the futurization of OTC derivatives. That might be the best possible outcome – exchange trading has even more social benefits than clearing in terms of transparency and competition. The increased basis risk is a non-issue because of the MM argument.
  2. Another possible outcome could be a reduction in end user hedging and consequently a smaller derivatives market. Under the MM assumptions, this need not be problematic either.
  3. The worst possible outcome would be an OTC market that is even more concentrated (G10 or even G5) and that uses clearing services provided by badly managed CCPs. This would be a nightmare scenario with a horrendous tail risk.

Posted at 16:21 on Wed, 18 Dec 2013

Tue, 17 Dec 2013

Electronic Trading

There was a time not so long ago when equities traded on electronic exchanges and everything else traded on OTC markets. We used to hear people argue vehemently that electronic trading would not work outside of the equities world. The belief was that the central order book could not handle large trade sizes. Algorithmic and high frequency trading changed all that. We learned that large trades could be sliced and diced into smaller orders that the central order book could handle easily. With exchanges offering lower and lower latency trading, a big order could be broken into pieces and fully executed faster than a block trade could be worked out upstairs in the old style.
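The slicing idea itself is simple enough to sketch in a few lines of Python. This is only a toy illustration (real execution algorithms also randomize child sizes and timing, and the quantities below are invented):

```python
def slice_order(total_qty, max_child_qty):
    """Split a large parent order into child orders no larger than
    max_child_qty, roughly as an execution algorithm might before
    feeding them to the central order book."""
    children = []
    remaining = total_qty
    while remaining > 0:
        child = min(remaining, max_child_qty)  # last child may be smaller
        children.append(child)
        remaining -= child
    return children

# A 25-share order with a 10-share clip limit:
print(slice_order(25, 10))  # → [10, 10, 5]
```

With low-latency exchange access, each child order can be fired and filled in microseconds, which is why the whole parent can complete faster than an old-style upstairs block negotiation.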

Slowly the new paradigm is expanding into new asset classes. The 2013 triennial survey shows that electronic trading has virtually taken over spot foreign exchange trading and is dominant in other parts of the foreign exchange market as well (Dagfinn Rime and Andreas Schrimpf, “The anatomy of the global FX market through the lens of the 2013 Triennial Survey”, BIS Quarterly Review, December 2013). The foreign exchange market has ceased to be an interbank market, with hedge funds and other non-bank financial entities becoming the biggest players. As Rime and Schrimpf explain:

Technological change has increased the connectivity of participants, bringing down search costs. A new form of “hot potato” trading has emerged where dealers no longer play an exclusive role.

The next battleground is corporate bonds. Post-Dodd-Frank, the traditional market makers are less willing to provide liquidity, and people are looking for alternatives, including the previously maligned electronic trading idea. McKinsey and Greenwich Associates have produced a report on Corporate Bond E-Trading which discusses the emerging trends but is pessimistic about equities-style electronic trading. I am not so pessimistic, because in my view if you can get hedge funds and HFTs to trade something, it will do fine on a central order book.

Posted at 18:03 on Tue, 17 Dec 2013

Fri, 13 Dec 2013

The Pharaoh's funeral plans

Post crisis, there has been a lot of interest in ensuring that large banks prepare a living will or funeral plan describing how they will be resolved if they fail. Though this does look like a good idea, I think there is a catch which is best illustrated with an example.

Some of the most successful funeral plans in history were those of the Egyptian Pharaohs, who began their reigns with the construction of the pyramids in which they were to be entombed. If anything, this would have increased the cost of the funeral: left to their successors, the pyramids would very likely have been less grandiose. Moreover, to a finance person it is obvious that the present value of the cost increased because the pyramids were built earlier than required.
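The present-value point is simple compounding; a toy calculation makes it concrete (the cost of 100, the 5% discount rate and the 30-year deferral are all invented for illustration):

```python
def present_value(cost, rate, years_until_needed):
    """Present value of a cost incurred years_until_needed years from now,
    discounted at the given annual rate."""
    return cost / (1 + rate) ** years_until_needed

# Building immediately costs the full 100 in present-value terms;
# deferring the same outlay 30 years at 5% costs far less today.
pv_now = present_value(100.0, 0.05, 0)
pv_deferred = present_value(100.0, 0.05, 30)
print(pv_now, round(pv_deferred, 2))
```

Pre-building the tomb converts a distant liability into an immediate one, which is exactly how the present value of the cost rises.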

Much the same thing may be true of the banks as well. Funeral plans may make regulators complacent about excessively large and complex banks; as a result, the costs would be higher when these banks do fail. Banks may also incur a lot of wasteful expenditure to prepare and defend funeral plans that may ultimately prove useless. The existence of these plans may also delay prompt corrective action for vulnerable banks. Why shut down a bank now when you believe (perhaps wrongly) that it can be shut down later without great difficulty? In short, the plans may allow mere words to substitute for real action.

Posted at 15:54 on Fri, 13 Dec 2013

Fri, 29 Nov 2013

Revisiting Fischer Black's deathbed paper

In 1995, Fischer Black submitted a paper on “Interest rates as options” when he was terminally ill with cancer. While publishing the paper (Journal of Finance, 1995, 50(5), 1371-1376), the Journal noted:

Note from the Managing Editor: Fischer Black submitted this paper on May 1, 1995. His submission letter stated: “I would like to publish this, though I may not be around to make any changes the referee may suggest. If I’m not, and if it seems roughly acceptable, could you publish it as is with a note explaining the circumstances?” Fischer received a revise and resubmit letter on May 22 with a detailed referee’s report. He worked on the paper during the Summer and had started to think about how to address the comments of the referee. He died on August 31 without completing the revision.

The paper contained an interesting idea to deal with the problem of negative interest rates – assume that the true or ‘shadow short rate’ can be negative, but the rate that we do observe is never negative because currency provides an option to earn a zero interest rate instead. Viewed this way, the interest rate can itself be viewed as an option (with a strike price of zero). What Black found attractive about this idea was that it made modelling easy: one could for example assume that the shadow rate follows a normal (Gaussian) distribution. Whenever the Gaussian distribution produces a negative interest rate, we simply replace it by zero. We do not need to assume a lognormal or square root process just to avoid negative interest rates.
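Black's construction is easy to simulate. A minimal Python sketch (the starting rate, drift, volatility and horizon below are illustrative assumptions, not values from the paper):

```python
import random

def simulate_shadow_rate(r0=0.02, mu=0.0, sigma=0.01, n_steps=120, seed=42):
    """Simulate a Gaussian (arithmetic) shadow short rate, which may go
    negative, and the observed rate, which is floored at zero because
    currency offers a zero-interest alternative (Black, 1995)."""
    random.seed(seed)
    shadow, observed = [r0], [max(r0, 0.0)]
    r = r0
    for _ in range(n_steps):
        r += mu + sigma * random.gauss(0.0, 1.0)  # Gaussian increments
        shadow.append(r)
        observed.append(max(r, 0.0))  # observed rate = max(shadow rate, 0)
    return shadow, observed

shadow, observed = simulate_shadow_rate()
print(min(shadow), min(observed))
```

Whenever the shadow rate wanders below zero and stays there, the observed rate sits at exactly zero for the whole stretch – the persistent zero that lognormal and square root processes cannot easily produce.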

While interesting in theory, the model did not prove very popular in practice. But five years of zero interest rates in the US have changed this. Neither the lognormal nor the square root process can easily yield a persistent zero interest rate; Black’s shadow rate achieves this in a very easy and natural manner. More than the finance community, it is the macroeconomics world that has rediscovered Black’s model. For example, Wu and Xia have a paper in which they show that macroeconomic models perform nicely even at the zero lower bound (ZLB) if the actual short rate is replaced by the shadow rate (h/t Econbrowser). The shadow rate has the same correlations with other macroeconomic variables at the ZLB as the actual rate has during normal times.

As I have mentioned previously on this blog, modelling interest rate risk at the ZLB is problematic and different clearing corporations have taken different approaches to the problem. Maybe, they should take Black’s shadow short rate more seriously.

Posted at 11:39 on Fri, 29 Nov 2013

Fri, 15 Nov 2013

Rakoff on financial crisis prosecutions

Judge Rakoff, who is best known for rejecting SEC settlements against Bank of America and Citigroup for not going far enough, has come out with a devastating critique of the US failure to prosecute high level executives for frauds related to the financial crisis.

Rakoff points out that the frauds of the 1970s, 1980s and 1990s all resulted in successful prosecutions of even the highest level figures.

In striking contrast with these past prosecutions, not a single high level executive has been successfully prosecuted in connection with the recent financial crisis, and given the fact that most of the relevant criminal provisions are governed by a five-year statute of limitations, it appears very likely that none will be.

First of all, Rakoff dismisses the legal difficulties in prosecuting crisis crimes.

Rakoff thinks that there are three reasons why there have been no prosecutions:

  1. Prosecutors have other priorities –
    • the FBI’s resources were diverted to fighting terrorism;
    • the SEC was focused on Ponzi schemes and accounting frauds;
    • the Department of Justice was bogged down with the prosecution of insider trading based on the Rajaratnam tapes.
  2. “[T]he Government’s own involvement in the underlying circumstances that led to the financial crisis ... [and] in the aftermath of the financial crisis ... would give a prudent prosecutor pause in deciding whether to indict a C.E.O. who might, with some justice, claim that he was only doing what he fairly believed the Government wanted him to do.”
  3. “The shift that has occurred over the past 30 years or more from focusing on prosecuting high-level individuals to focusing on prosecuting companies and other institutions.”

Rakoff is known for his strong views on the last point and he lays out the case brilliantly:

If you are a prosecutor attempting to discover the individuals responsible for an apparent financial fraud, you go about your business in much the same way you go after mobsters or drug kingpins: you start at the bottom and, over many months or years, slowly work your way up. Specifically, you start by “flipping” some lower or mid-level participant in the fraud ... With his help, and aided by the substantial prison penalties now available in white collar cases, you go up the ladder. ...

But if your priority is prosecuting the company, a different scenario takes place. Early in the investigation, you invite in counsel to the company and explain to him or her why you suspect fraud. He or she responds by assuring you that the company wants to cooperate and do the right thing, and to that end the company has hired a former Assistant U.S. Attorney, now a partner at a respected law firm, to do an internal investigation. ... Six months later the company’s counsel returns, with a detailed report showing that mistakes were made but that the company is now intent on correcting them. You and the company then agree that the company will enter into a deferred prosecution agreement that couples some immediate fines with the imposition of expensive but internal prophylactic measures. For all practical purposes the case is now over. You are happy ...; the company is happy ...; and perhaps the happiest of all are the executives, or former executives, who actually committed the underlying misconduct, for they are left untouched.

I suggest that this is not the best way to proceed. Although it is supposedly justified in terms of preventing future crimes, I suggest that the future deterrent value of successfully prosecuting individuals far outweighs the prophylactic benefits of imposing internal compliance measures that are often little more than window-dressing. Just going after the company is also both technically and morally suspect. It is technically suspect because, under the law, you should not indict or threaten to indict a company unless you can prove beyond a reasonable doubt that some managerial agent of the company committed the alleged crime; and if you can prove that, why not indict the manager? And from a moral standpoint, punishing a company and its many innocent employees and shareholders for the crimes committed by some unprosecuted individuals seems contrary to elementary notions of moral responsibility.

Rakoff concludes with a scathing criticism:

So you don’t go after the companies, at least not criminally, because they are too big to jail; and you don’t go after the individuals, because that would involve the kind of years-long investigations that you no longer have the experience or the resources to pursue.

After the series of frauds in the late 1990s and early 2000s in the US (Enron, Worldcom, Tyco and Adelphia), Europe (Lernout and Hauspie, Vivendi, ABB and KirchMedia) and India (Tata Finance), I wrote that “The US has shown that it can prosecute and punish wrongdoers far more speedily than most other jurisdictions.” I am not at all sure about this today.

Posted at 21:34 on Fri, 15 Nov 2013

Fri, 08 Nov 2013

Equity markets are different

Equity markets (specifically the market for large capitalization stocks) seem to be very different from other markets in that they are the only markets that are unconditionally liquid. The Basel Committee has officially recognized this – in their classification of 24 markets by liquidity horizons, the large cap equity market is the only market in the most liquid bucket. (Basel Committee on Banking Supervision, Fundamental review of the trading book: A revised market risk framework, Second Consultative Document, October 2013, Table 2, page 16)

There is abundant anecdotal evidence for the greater liquidity of large cap equity markets in stressed conditions – you may not like the price, but you would not have any occasion to complain about the volume. For example, in India when the fraud in Satyam was revealed, the price of the stock dropped dramatically, but the market remained very liquid. In fact, the liquidity of the stock on that day was far greater than normal. During the global financial crisis, stock markets remained very liquid while liquidity in many other markets dried up. In the 2008 crisis, Societe Generale could unwind Kerviel’s unauthorized equity derivative position of €50 billion in just two days.

There could be many reasons why large cap equity markets are indeed different:

At least some of these features can be replicated in other markets, and such replication should perhaps be a design goal.

Posted at 21:45 on Fri, 08 Nov 2013

Wed, 06 Nov 2013

Gorton defends opacity of the plutocrats

Gary Gorton has published a paper on “The development of opacity in U.S. banking” (NBER working paper 19540, October 2013). He writes that before the US Civil War:

... bank note markets functioned as “efficient” markets; the discounts were informative about bank risk. Banks at the same location competed, and the note market enforced common fundamental risk at these banks.

Then bank notes were replaced by checking accounts, and the banks were taken over by rich men who kept the price per share high enough to put the stock out of reach of most investors, thereby effectively closing down the market for their stocks. Simultaneously, the clearing houses brought about a culture of secrecy, so that depositors also knew little about the health of individual banks.

Gorton thinks that this shutdown of informative and efficient markets was a great thing for economic efficiency – a claim that I find difficult to believe.

On the other hand, the endogenous opacity that Gorton describes is completely analogous to the conclusion of another recent paper (“Shining a Light on the Mysteries of State: The Origins of Fiscal Transparency in Western Europe” by Timothy C. Irwin, IMF Working Paper, WP/13/219, October 2013) on the opacity of sovereign finances:

When power has been tightly held by a financially self-sufficient king, much information about government, including government finances, has remained secret. When power has been shared, either in democracies or sufficiently broad oligarchies, information on government finances has tended to become public.

Posted at 15:20 on Wed, 06 Nov 2013

Sun, 27 Oct 2013

Greenspan: successful policy will always create a bubble

In an interview with Gillian Tett in the Financial Times of October 25, 2013 (behind paywall), Alan Greenspan says:

Beware of success in policy. A stable, moderately growing, non-inflationary environment will create a bubble 100 per cent of the time.

The first objection to this argument is that a bubble is by definition unstable and so the term “stable” should be changed to “apparently stable”. That apart, Greenspan seems to be making inferences from just one event – the Great Moderation. From a sample size of one, inferences can be drawn in many directions, and many permutations and combinations are possible. Some possible variants are:

Finally, not many would agree with Alan Greenspan’s self-serving claim that bubble blowing can be regarded as a successful policy.

Posted at 14:28 on Sun, 27 Oct 2013

Sun, 20 Oct 2013

SEC order explains Knight Capital systems failure

More than a year ago, Knight Capital suffered a loss of nearly half a billion dollars and needed to sell itself after defective software resulted in nearly $7 billion of wrong trades. A few days back, the US SEC issued an order against Knight Capital that described exactly what happened.

It appears to me that there were three failures:

  1. It could be argued that the first failure occurred in 2003 when Knight chose to let executable code lie dormant in the system after it was no longer needed. I would like such code to be commented out or disabled (through a conditional compilation flag) in the source code itself.
  2. I think the biggest failure was in 2005. While making changes to the cumulative order routine, Knight did not subject the Power Peg code to the full panoply of regression tests. Testing should be mandatory for any code that is left in the system even if it is in disuse.
  3. The third and perhaps least egregious failure was in 2012 when Knight did not have a second technician review the deployment of the RLP code. Furthermore, Knight did not have written procedures that required such a review.
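The first point can be made concrete. In a language without conditional compilation, a similar effect can be approximated with a hard kill switch that makes a retired code path fail loudly rather than silently execute. This is only a sketch with invented names (echoing the SEC order's terminology), not Knight's actual code:

```python
# Hard kill switch for a retired code path. Unlike a reusable activation
# flag, flipping this back requires a deliberate source change that should
# itself go through review and regression testing.
POWER_PEG_RETIRED = True  # hypothetical name

def route_power_peg(order):
    """Retired routing logic, kept only for reference."""
    if POWER_PEG_RETIRED:
        raise RuntimeError("Power Peg is retired and must not execute")
    # ... old routing logic would go here ...

def route(order):
    """The live routing path."""
    return {"order": order, "venue": "RLP"}

try:
    route_power_peg({"qty": 100})
except RuntimeError as e:
    print("blocked:", e)
```

The point is that a dormant path guarded this way cannot be accidentally reactivated by reusing an old flag in a deployment, which is essentially what happened to Knight.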

I am thus in complete agreement with the SEC’s observation that:

Knight also violated the requirements of Rule 15c3-5(b) because Knight did not have technology governance controls and supervisory procedures sufficient to ensure the orderly deployment of new code or to prevent the activation of code no longer intended for use in Knight’s current operations but left on its servers that were accessing the market; and Knight did not have controls and supervisory procedures reasonably designed to guide employees’ responses to significant technological and compliance incidents; (para 9 D)

However, the SEC adopted Rule 15c3-5 only in November 2010. The two biggest failures occurred prior to this rule. Perhaps, the SEC found it awkward to levy a $12 million fine for the failure of a technician to copy a file correctly to one out of eight servers. The SEC tries to get around this problem by providing a long litany of other alleged risk management failures at Knight, many of which do not stand up under serious scrutiny.

For example, the SEC says: “Knight had a number of controls in place prior to the point that orders reached SMARS ... However, Knight did not have adequate controls in SMARS to prevent the entry of erroneous orders.” In well-designed code, it is good practice to have a number of “asserts” that ensure that inputs are not logically inconsistent (for example, that price and quantity are not negative or that an order date is not in the future). But a piece of code that is called only from other code would not normally implement such control checks.

For example, an authentication routine might verify a customer’s password (and other tokens in the case of two-factor authentication). Is every routine in the code required to check the password again before it does its work? This is surely absurd.
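The distinction can be illustrated with a hypothetical order handler (invented names, not code from the SEC order): validation asserts belong at the boundary routine where inputs enter the system, while internal routines trust their already-validated callers:

```python
def accept_order(price, qty, side):
    """Boundary routine: validate inputs where they enter the system."""
    assert price > 0, "price must be positive"
    assert qty > 0, "quantity must be positive"
    assert side in ("buy", "sell"), "unknown side"
    return _route(price, qty, side)

def _route(price, qty, side):
    """Internal routine: called only from validated code, so it does not
    re-check every invariant (just as an internal routine would not
    re-verify a password already checked at login)."""
    return {"price": price, "qty": qty, "side": side, "routed": True}

print(accept_order(101.5, 200, "buy"))
```

Demanding that every internal routine like `_route` repeat the boundary checks is the software equivalent of re-asking for the password at every step.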

Posted at 21:45 on Sun, 20 Oct 2013

Wed, 16 Oct 2013

When solvent sovereigns default (aka technical default)

As the US approaches the deadline for resolving its debt ceiling stalemate, there has been much talk about the consequences of a “technical default”. Across the Curve has an acerbic comment about the utter inappropriateness of this terminology:

I guess a technical default is one in which you personally do not own any maturing debt or hold a coupon due an interest payment. If you hold one of those instruments it is a real default!

It is more appropriate to talk about defaults by a solvent sovereign, where the ability of the sovereign to repay remains high even after the promise of timely repayment has been broken. This kind of default was pretty common until about a century ago. In the old days, defaults of this kind arose due to liquidity problems or some kind of fiscal dysfunction. However strange the US situation may look to us on the basis of our experience in recent decades, it is not at all unusual in the broad sweep of history.

Philip II of Spain defaulted four times during his reign. Spain was a superpower when it defaulted, and it remained a superpower after its initial couple of defaults. In a fascinating paper, Drelichman and Voth explain:

The king’s repeated bankruptcies were not signs of insolvency ... future primary surpluses were sufficient to repay Philip II’s debts ... In addition, lending was profitable ... (Drelichman and Voth (2011), “Lending to the Borrower from Hell: Debt and Default in the Age of Philip II”, The Economic Journal, 121, 1205-1227)

As long as Spain owned the largest silver mines in the world, its ability to repay debts was not seriously in question (even when the debts reached 60% of GDP under Philip II). One can see a close parallel with the very high ability of the US to repay its debts if its politicians choose to do so.

In England, the default of Charles II (the notorious stop of the exchequer) was a result of fiscal dysfunction rather than any inability of England to repay its modest debts. Charles was not on the best of terms with his parliament and therefore could not levy new taxes to finance his expenses. The same parliament was of course willing to levy far greater taxes and support far greater debts to finance the wars of a monarch more to its liking (William of Orange) after the bloodless revolution. This episode also seems to have much in common with modern day US politics.

Another interesting phenomenon which appears counterintuitive to many people is that sovereign default often happens under very strong and competent rulers. If we look at England, Edward III, Henry VIII and Charles II were among its greatest kings. (In the case of Henry, I am counting the great debasement as a default. In the case of Charles, the chartering of the Royal Society cements his place as one of that country’s greatest monarchs in my view.) Turning to the US, one of its outstanding presidents (Franklin Roosevelt) presided over that country’s only default so far (the repudiation of the gold clause). Perhaps, only a strong ruler is confident enough to risk all the consequences of default. Lesser rulers prefer to muddle along rather than force the issue.

On another note, it may be that we are entering a new age in which, in at least some rich countries, sovereign default will no longer be as much of a taboo as it is today. Default may indeed be the least unpleasant of all the choices that await a rich, over-indebted and ageing society, but only truly heroic leaders may be willing to take the plunge.

Posted at 15:47 on Wed, 16 Oct 2013

Sun, 06 Oct 2013

Fama French and Momentum Factors: Data Library for Indian Market

My colleagues, Prof. Sobhesh K. Agarwalla and Prof. Joshy Jacob, and I have created a publicly available data library providing the Fama-French and momentum factor returns for the Indian equity market using data from CMIE Prowess. We plan to keep updating the data on a regular basis. Because of data limitations, the library currently starts in January 1993, but we are trying to extend it backward.

We differ from the previous studies in several significant ways. First, we cover a greater number of firms than the existing studies. Second, we exclude illiquid firms to ensure that the portfolios are investable. Third, we classify firms into small and big using a more appropriate cut-off that reflects the distribution of firm size. Fourth, as there are several instances of public companies vanishing in India, we compute the returns with a correction for survivorship bias.
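The heart of any such factor construction is a sort on firm characteristics. Here is a deliberately stylized sketch of an SMB-type (small minus big) factor with made-up numbers; the actual construction in the paper differs in its cut-offs, its double sorting on value, and its value weighting:

```python
def smb_factor(stocks):
    """Toy SMB (small minus big): split firms at the median market cap,
    then take the difference between the average small-firm and average
    big-firm returns. Assumes market caps are distinct and equal weights;
    real factor construction is value-weighted over 2x3 size/value sorts."""
    caps = sorted(s["mcap"] for s in stocks)
    median = caps[len(caps) // 2]
    small = [s["ret"] for s in stocks if s["mcap"] < median]
    big = [s["ret"] for s in stocks if s["mcap"] >= median]
    return sum(small) / len(small) - sum(big) / len(big)

# Hypothetical one-month cross-section: two small firms, two big firms.
stocks = [
    {"mcap": 100, "ret": 0.04},
    {"mcap": 200, "ret": 0.03},
    {"mcap": 5000, "ret": 0.01},
    {"mcap": 8000, "ret": 0.02},
]
print(round(smb_factor(stocks), 4))
```

Survivorship correction matters precisely here: if vanished small firms are dropped from `stocks`, the small-firm average is computed over survivors only and the factor return is biased.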

The methodology is described in more detail in our Working Paper (also available at SSRN): Sobhesh K. Agarwalla, Joshy Jacob & Jayanth R. Varma (2013) “Four factor model in Indian equities market”, W.P. No. 2013-09-05, Indian Institute of Management, Ahmedabad.

Posted at 18:05 on Sun, 06 Oct 2013

Sun, 29 Sep 2013

33 ways to control algorithmic trading

Earlier this month, the US Commodity Futures Trading Commission (CFTC) published a Concept Release on Risk Controls and System Safeguards for Automated Trading Environments. It seeks comments on a laundry list of 33 measures that could be adopted to control algorithmic and high frequency trading (I arrived at this count from the list on pages 116-132, counting sub-items in the first column as well).

The proposals on this list range from the sensible to the problematic, and there does not seem to have been much of an effort to analyse their economic consequences. The idea appears to be to outsource this analysis to those who choose to submit comments. There is nothing wrong with that. But with the current CFTC Chairman, Gary Gensler, set to step down soon, nothing much might come out of the concept release.

Posted at 12:32 on Sun, 29 Sep 2013

Wed, 25 Sep 2013

Systemic effects of the Merton model

David Merkel has posted on his blog, The Aleph Blog, a note that he wrote in 2004 about how widespread use of the Merton model to evaluate credit risk influences the corporate bond market itself. The Merton model regards risky debt as a combination of risk-free debt and a short put option on the assets of the issuer. Credit risk assessment is then a question of valuing this put option – a process that relies largely on stock prices and implied volatilities. Merkel writes:

Over the last seven years, more and more managers of corporate credit risk use contingent claims models. Some use them exclusively, others use them in tandem with traditional models. They have a big enough influence on the corporate bond market that they often drive the level of spreads. Because of this, the decline in implied volatility for the indices and individual companies has been a major factor in the spread compression that has gone on. I would say that the decline in implied volatility, and deleveraging, has had a larger impact than improving profitability on spreads.
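The put-option decomposition behind this can be made concrete. Under the Merton model, risky debt is worth risk-free debt minus a Black-Scholes put on the firm's assets struck at the face value of debt, and the credit spread follows from the implied yield. A minimal sketch (all inputs are illustrative; real implementations must also back out unobservable asset values and volatilities from equity data):

```python
from math import log, sqrt, exp, erf

def _norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_spread(assets, debt_face, asset_vol, r, maturity):
    """Credit spread implied by the Merton model: risky debt equals
    risk-free debt minus a put on firm assets struck at the debt face."""
    d1 = (log(assets / debt_face) + (r + 0.5 * asset_vol ** 2) * maturity) \
         / (asset_vol * sqrt(maturity))
    d2 = d1 - asset_vol * sqrt(maturity)
    put = debt_face * exp(-r * maturity) * _norm_cdf(-d2) \
          - assets * _norm_cdf(-d1)
    risky_debt = debt_face * exp(-r * maturity) - put
    y = -log(risky_debt / debt_face) / maturity  # yield on the risky debt
    return y - r

# Higher asset volatility (proxied in practice by equity implied vol)
# widens the spread; falling implied vol compresses it, as Merkel describes.
low_vol = merton_spread(150.0, 100.0, 0.15, 0.03, 5.0)
high_vol = merton_spread(150.0, 100.0, 0.35, 0.03, 5.0)
print(low_vol < high_vol)
```

The dependence of the spread on implied volatility in this calculation is exactly why a broad decline in equity implied vols mechanically compresses model-driven credit spreads.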

The Merton model is probably under-utilized in India, and so I have not encountered this problem. But Merkel is saying that in some countries it is overused, and over-reliance on it can be a problem. The global financial crisis highlighted the dangers of outsourcing credit evaluation to the rating agencies. The Merton model in some ways amounts to outsourcing credit evaluation to the equity markets, and this too could end badly. I have wondered for some time now why advanced-country central banks act as if they have adopted equity price targeting. If the Merton model is so influential, then the primary channel of monetary transmission to the credit markets would lie via equity markets, and targeting equity prices suddenly makes a lot of sense to the central banks themselves.

But those who buy poor credit risks on the basis of Merton model credit assessments that have been flattered by QE inflated stock prices (and QE dampened volatilities) might be in for a rude surprise if and when the central banks decide to let equity markets find their natural level and volatility.

Posted at 13:41 on Wed, 25 Sep 2013

Sun, 15 Sep 2013

Snowden disclosures and the cryptographic foundations of modern finance

I have always believed that the greatest tail risk in finance is a threat to its cryptographic foundations. Everything in modern finance is an electronic book entry that could suddenly evaporate if the cryptography protecting it could be subverted. Such a cryptographic catastrophe would make the Lehman bankruptcy five years ago look like a picnic.

Global finance should therefore be alarmed by the Snowden disclosures earlier this month that the large technology companies have been collaborating with the US government to actively subvert internet encryption. It is claimed that backdoors have been built into much commercial encryption software and that even the standards relating to encryption have been compromised.

I do not think this is about the US at all. It is very likely that large technology companies are extending similar cooperation to other governments that control large markets. A decade ago, Microsoft publicly announced that it had provided the Chinese government access to the Windows source code. Blackberry’s long resistance to the Indian government’s desire for access to its encryption suggests that the Indian market is not large enough to induce quick cooperation, but I would be surprised if the US and China were the only countries that are able to bend the large technology companies to their ends. Countries like Russia and Israel with proven cyber warfare capabilities would also have achieved some measure of success.

In this situation, financial firms around the world should consider themselves as potential targets of cyber warfare. Alternatively, they could just become collateral damage in the struggle between two or more cyber superpowers. In my view, this is an existential threat to the modern financial system.

The saving grace is that there is nothing to suggest that the mathematics of encryption has become less reliable. The problems are all in the implementation – commercial routers, commercial operating systems, commercial browsers and commercial encryption software may have been compromised but not the mathematics of encryption, at least not yet.

Perhaps, finance can still escape a cryptographic meltdown if it embraces open source software for all cryptography critical applications. As computer security expert Bruce Schneier explains: “Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it.”

Posted at 16:45 on Sun, 15 Sep 2013     View/Post Comments (1)     permanent link

Sun, 01 Sep 2013

Krugman on Asian Crisis as success story

Paul Krugman says it partly tongue in cheek, but it is still remarkable to read this from the world’s foremost authority on the Asian Crisis:

I will say, 15 years ago it would never have occurred to me that we would be looking back at Asia’s crisis as a success story.

My last blog post on the good that came out of the Asian Crisis looks a little less outrageous now. Also, while I gave the Malaysia of 1997-98 as an example of a bad response to a crisis, Krugman points to peripheral Europe. Now that is a truly atrocious response to a crisis – one in which the creditors are still in charge and are still thinking like creditors.

Posted at 11:03 on Sun, 01 Sep 2013     View/Post Comments (0)     permanent link

Thu, 29 Aug 2013

Why India's crisis could be a good thing

I recall telling some Indian policy makers in the late 1990s that it was unfortunate that India had not fallen victim to the Asian Crisis. I need hardly add that the rest of the conversation was not very pleasant. However, one of the great privileges of living in a democracy is that one can get away with saying such things – policy makers do not have firing squads at their disposal (at least not yet).

Now we seem to be getting a crisis of the kind which I have been expecting for several months now (see my blog posts here, here and here). This is a good time to reflect on the aftermath of the Asian Crisis to understand how (under the right conditions) a lot of good can come out of our crisis.

As in the East Asia of 1997, the Indian corporate sector has come to be dominated by a rent-seeking kleptocracy that resembles the Russian oligarchs. Unlike the businesses that came to prominence in the first decade after the 1991 reforms, many of the business groups that have emerged in the last decade have been tainted by all kinds of unsavoury conduct. For the country to reestablish itself on the path of high growth and economic transformation, many of these unproductive businesses have to be swept away. In 1999, the bankruptcy of the Daewoo Group was important in reforming the Chaebol and getting Korea back on track again. We need to see something similar happen in India. A useful analogy is that of a forest fire that clears all the deadwood and allows fresh shoots to grow and rejuvenate the forest.

One of the wonderful things about a financial crisis is that the capital allocation function shifts decisively from those who think like short term lenders to those who think like owners. In a debt restructuring, for example, erstwhile lenders are forced to think like equity holders, and they end up allocating capital much better than they did when they were just chasing yields while floating on the high tide of liquidity. They have to stop worrying about sunk costs and focus more on future prospects.

A very good example is what the Asian Crisis did to Samsung. At the time of the crisis, Samsung was an also-ran Chaebol whose head was obsessed with building a car business like Daewoo or Hyundai. In the consumer electronics business, it was well behind Sony. The crisis forced Samsung to abandon its car making dreams under enormous pressure from the financial markets. As it focused on what it knew better, Samsung has created a world-beating business, while Sony, ensconced in its cosy world in a country which largely escaped the Asian Crisis, has simply gone downhill.

Even at the level of countries, one can see how a country like Malaysia that changed least in response to the crisis has been in relative decline as compared to its peers. I cannot help speculating that in the emerging crisis, China’s large reserves will allow that country the luxury of behaving like the Malaysia of 1997. If by chance, India responds like the Korea of 1997, Asia’s economic landscape in the next decade will be very interesting.

Another interesting parallel is that in 1997/1998, several of the crisis affected countries faced elections at the height of the crisis or had a change of government by other means (Indonesia). Far from leading to political confusion, these elections helped to legitimize decisive action at the political level. Nothing concentrates a politician’s mind more than a bankrupt treasury. We saw that in 1991 (another case of an election at the time of crisis). We could see that once again in 2014.

Of course, nothing is preordained. We can blow our chances. But to those who think that 1991 was the best thing that ever happened to this country, there is at last reason to hope that we will get another 1991. In these bleak times, all that one can do is to be optimistic in a pragmatic way.

Posted at 21:49 on Thu, 29 Aug 2013     View/Post Comments (5)     permanent link

Wed, 21 Aug 2013

Casualties of credit

I just finished reading Carl Wennerlind’s book Casualties of Credit about the English financial revolution in the late seventeenth century. Much has been written about this period, including of course the seminal paper by Douglass North and Barry Weingast on “Constitutions and commitment” (Journal of Economic History, 1989). Yet, I found a lot of material in the book new and highly illuminating.

Especially interesting was the description of the crisis of 1710 – which I think was the first instance in history of the bond market trying to arm twist the government to change its policies. I was also fascinated by the discussion about how Isaac Newton used his vast talents to hunt down coin clippers and counterfeiters, and then ruthlessly sent them to the gallows. I knew that apart from inventing calculus and much of physics, Newton had time to dabble in alchemy, but I had thought that his position as Master of the Mint was a sinecure. Well, Newton chose not to treat it as a sinecure.

Posted at 15:41 on Wed, 21 Aug 2013     View/Post Comments (1)     permanent link

Sun, 18 Aug 2013

Quadrillion mantle passes from Italy to Japan after a decade

In the 1990s, we used to joke that the word quadrillion was invented to measure the Italian public debt. The introduction of the euro put an end to this joke. Italy’s public debt is currently “only” around two trillion euros, but it would be around four quadrillion lire at 1999 exchange rates. After a gap of more than a decade, the mantle has passed to Japan, whose public debt crossed the quadrillion yen mark recently.
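The back-of-the-envelope conversion uses the irrevocable 1999 rate of 1936.27 lire per euro:

```python
# Restating Italy's euro-era public debt in lire at the irrevocable
# 1999 conversion rate of 1936.27 lire per euro.
LIRE_PER_EURO = 1936.27
debt_eur = 2.0e12                     # roughly two trillion euros
debt_itl = debt_eur * LIRE_PER_EURO   # close to four quadrillion lire
```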

The only other important monetary amount that I am aware of that could be in the quadrillion range is the total outstanding notional value of all financial derivatives in the world. The BIS estimate (which is perhaps conservative) for this is only around $600 trillion, but some other estimates (which are perhaps exaggerated) put it in the range of $1,200 – $1,500 trillion.

Posted at 13:38 on Sun, 18 Aug 2013     View/Post Comments (0)     permanent link

Sun, 04 Aug 2013

Do regulators understand countervailing power in markets?

Practitioners understand the importance of countervailing power in keeping markets clean. The biggest obstacle that a would-be manipulator faces is a big player on the opposite side with the incentives and ability to block the attempted manipulation. Without that countervailing power, the regulator would be stretched very thin trying to combat the myriad games that are being played out in the market at any point of time. But regulators seem to be completely oblivious of this and often step in to curb the countervailing power without realizing that they are thereby allowing people on the other side a free run.

This was highlighted yet again by a recent order of the UK Financial Conduct Authority (FCA), the successor to the Financial Services Authority (FSA). The FCA fined Michael Coscia for a trading strategy that made money at the cost of high frequency traders (HFTs).

HFTs often try to trade in front of other people. When the HFT suspects that a large trader is trying to buy (sell), the HFT tries to buy (sell) immediately before the price has gone up (down), and then tries to turn around to sell to (buy from) the large trader at an inflated (depressed) price. Michael Coscia created a trading strategy designed to give the HFTs a taste of their own medicine in the crude oil market. He placed a set of large orders designed to fool the HFTs into thinking that he was trying to sell a big block. When the HFTs began front running his purported large sell order, Coscia turned around and bought some crude from them at below market prices. He then performed the whole operation in reverse, fooling the HFTs into thinking that there was a large buy order in the market. When they tried to front run that buy order, Coscia sold the crude (that he had bought in the previous cycle) back to the HFTs.

The FCA thinks that Coscia violated the exchange rules, which provided that “it shall be an offence for a trader or Member to engage in disorderly trading whether by high or low ticking, aggressive bidding or offering or otherwise.” From a legal point of view, the FCA is probably quite correct. But the net effect of the action is to neutralize the kind of trading strategies that would have held the HFTs in check. The FCA of course thinks that it is acting against HFTs because Coscia’s trading strategy also involved high frequency trading.

Posted at 14:29 on Sun, 04 Aug 2013     View/Post Comments (4)     permanent link

Thu, 25 Jul 2013

Legal theory of finance

The Journal of Comparative Economics (subscription required) has a special issue on Law in Finance (The CLS Blue Sky Blog has a series of posts summarizing and commenting about this work – see here, here and here). The lead paper in the special issue by Katharina Pistor presents what she calls the Legal Theory of Finance (LTF); the other papers are case studies of different aspects of this research programme.

Most finance researchers are aware of the Law and Finance literature (La Porta, Shleifer, Vishny and a host of others), but Pistor argues that “Law & Finance is ... a theory for good times in finance, not one for bad times.” She argues that though finance contracts may appear to be clear and rigid, they are in reality in the nature of incomplete contracts because of imperfect knowledge and inherent uncertainty. When tail events materialize, it is desirable to rewrite the contracts ex post. This can be done in two ways: by the taxpayer bailing out the losers, or by an elastic interpretation of the law.

One of the shrill claims of the LTF is that legal enforcement is much more elastic at the centre while being quite rigid at the periphery. Bailouts are also more likely at the centre. I do not see anything novel in this observation, which should be obvious to anybody who has not forgotten the first word of the phrase “political economy”. It should also be obvious to anybody who has read Shakespeare’s great play about finance (The Merchant of Venice), and noted how differently the law was applied to Jews and Gentiles. It has also been all too visible throughout the global financial crisis and now in the eurozone crisis.

Another persistent claim is that all finance requires the backstop of the sovereign state, which is the sole issuer of paper money. This is in some sense true of most countries for the last hundred years or so, though I must point out that the few financial markets that one finds in Somalia or Zimbabwe function only because they are not dependent on the state. The LTF claim on the primacy of the state was certainly not true historically. Until the financial revolution in Holland and later in England, merchants were typically more creditworthy than sovereigns. Bankers bailed out the state and not the other way around.

Most of the case studies in the special issue do not seem to be empirically grounded in the way that we have come to expect in modern finance. I was not expecting any fancy econometrics, but I did expect to see the kind of rich detail that I have seen in the sociology of finance literature. The only exception was the paper by Akos Rona-Tas and Alya Guseva on “Information and consumer credit in Central and Eastern Europe”. I learned a lot from this paper and will probably blog about it some day, but it seemed to be only tangentially about the LTF.

Posted at 13:40 on Thu, 25 Jul 2013     View/Post Comments (0)     permanent link

Sun, 14 Jul 2013

Dubious legal foundations of modern finance?

I have been reading a 2008 paper by Kenneth C. Kettering (“Securitization and Its Discontents: the Dynamics of Financial Product Development”) arguing that securitization is built on dubious legal foundations – specifically there are possible conflicts with aspects of fraudulent transfer law. Kettering argues that securitization is an example of a financial product that has become so widely used that it cannot be permitted to fail, notwithstanding its dubious legal foundations.

I am not a lawyer (and Kettering’s paper is over 150 pages long) and therefore I am unable to comment on the legal validity of his claims. But, I also recall reading Annelise Riles’s book Collateral Knowledge: Legal Reasoning in the Global Financial Markets (University of Chicago Press, 2011), which makes somewhat similar claims. But her ethnographic study was focused on Japan, and when I read that book, I had assumed that the problems were specific to that country.

Posted at 21:55 on Sun, 14 Jul 2013     View/Post Comments (1)     permanent link

Sun, 07 Jul 2013

Non discretionary portfolio management

Last month, the Reserve Bank of India (RBI) released draft guidelines on wealth management by banks. I have no quarrels with the steps that the RBI has taken to reduce mis-selling. My comments are related to something that they did not change:

4.3.2 PMS-Non-Discretionary The non-discretionary portfolio manager manages the funds in accordance with the directions of the client. Thus under Non-Discretionary PMS, the portfolio manager will provide advisory services enabling the client to take decisions with regards to the portfolio. The choice as well as the timings of the investment decisions rest solely with the investor. However the execution of the trade is done by the portfolio manager. Since in non-discretionary PMS, the portfolio manager manages client portfolio/funds in accordance with the specific directions of the client, the PMS Manager cannot act independently. Banks may offer non-discretionary portfolio management services.

... Portfolio Management Services (PMS)- Discretionary: The discretionary portfolio manager individually and independently manages the funds of each client in accordance with the needs of the client. Under discretionary PMS, independent charge is given by the client to the portfolio manager to manage the portfolio/funds. ... Banks are prohibited from offering discretionary portfolio management services. (emphasis added)

I am surprised that regulators have learnt nothing from the 2010 episode in which an employee of a large foreign bank was able to misappropriate billions of rupees from high net worth individuals, including one of India’s leading business families (see for example here, here and here).

My takeaway from that episode was that discretionary PMS is actually safer and more customer friendly than non-discretionary PMS. After talking to numerous people, I am convinced that the so-called non-discretionary PMS is pure fiction. In reality, there are only two ways to run a large investment portfolio:

  1. The advisory model where the bank provides investment advice and the client takes investment decisions and also handles execution, custody and accounting separately.
  2. The de facto discretionary PMS where the bank takes charge of everything. The fiction of a non-discretionary PMS is maintained by the customer signing off on each transaction often by signing blank cheques and other documents.

When you think carefully about it, the bundling of advice, execution, custody and accounting without accountability is a serious operational risk. One could in fact argue that the RBI should ban non-discretionary PMS and allow only discretionary PMS. Discretionary PMS is relatively safe because the bank has unambiguous responsibility for the entire operational risk.

The only argument for non-discretionary PMS might be that the PMS provider is poorly capitalized or otherwise not very reliable. But in that case, the investor should be imposing strict segregation of functions and should never be entrusting advice, execution, custody and accounting to the same entity.

Posted at 13:59 on Sun, 07 Jul 2013     View/Post Comments (4)     permanent link

Tue, 02 Jul 2013

Consumer protection may be a bigger issue than systemic risk

Since the global financial crisis, policy makers and academics alike have focused attention on systemic risk, but consumer protection is an equally big, if not bigger, issue that has not received the same attention. John Lanchester has a long (6700 word) essay in the London Review of Books, arguing that the PPI (payment protection insurance) mis-selling scandal in the UK was bigger than all the other banking scandals – the London Whale, UBS (Adoboli), HBOS, Libor rigging and several others.

Lanchester argues the case not only because the costs of the PPI scandal could go up to £16-25 billion ($24-37 billion), but also because it happened at the centre of the banks’ retail operations and involved a more basic breach of what banking is supposed to be about. Interestingly, the huge total cost of the scandal is the aggregation of small average payouts of only £2,750 to each affected customer, indicating that the mis-selling was so pervasive as to become an integral part of the business model itself.

Posted at 12:40 on Tue, 02 Jul 2013     View/Post Comments (0)     permanent link

Starred items from Google Reader

Updated: In the comments, Maries pointed me to Mihai Parparita’s Reader is Dead tools (see also here and here). Though Google Reader has officially shut down, it is still accessible and Mihai’s tools are still working, and I was able to create a multi-GB archive of everything that existed in my Google Reader. But the tools required to read this archive are still under development. So in the meantime, I still need my old code and maybe more such code to read all the XML and JSON files in these archives.

With Google Reader shutting down, I have been experimenting with many other readers including Feedly and The Old Reader. Since many feed readers are still being launched and existing readers are being improved, I may keep changing my choice over the next few weeks. Importing subscriptions from Google Reader to any feed reader is easy using Google Takeout. The problem is with the starred items. I finally sat down and wrote a python script that reads the starred.json file that is available from Google Takeout and writes out an html file containing all the starred items.

Python’s json library makes reading and parsing the json file a breeze. By looking at some of the entries, I think I have figured out the most important elements of the structure. I am not sure that I have understood everything, and so suggestions for improving the script are most welcome.
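A stripped-down sketch of the idea, assuming the Takeout file has an items list whose entries carry a title, alternate links and a summary or content body (these field names are inferred from inspecting the file, not from any documented schema):

```python
import html
import json

def starred_to_html(data):
    """Render the parsed starred.json structure as a single HTML page.
    Field names (items/title/alternate/summary/content) are inferred
    from inspecting the file; entries missing a field get defaults."""
    parts = ["<html><body>"]
    for item in data.get("items", []):
        title = html.escape(item.get("title", "(untitled)"))
        links = item.get("alternate", [])
        href = links[0].get("href", "#") if links else "#"
        # the body sits under either "summary" or "content"
        body = item.get("summary", item.get("content", {})).get("content", "")
        parts.append('<h2><a href="%s">%s</a></h2>' % (href, title))
        parts.append("<div>%s</div>" % body)
    parts.append("</body></html>")
    return "\n".join(parts)

if __name__ == "__main__":
    with open("starred.json", encoding="utf-8") as f:
        print(starred_to_html(json.load(f)))
```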

Where the original feed does not contain the entire post, but only a summary, ideally I would like to follow the link, convert the web page to PDF and add a link pointing to the converted PDF file. This would protect against link rot. I tried doing this with wkhtmltopdf but I was not satisfied with the quality of the conversion. Any suggestions for doing this would be most welcome. Ideally, I would like to use Google Chrome’s ability to print a web page as PDF, but I do not find any command line options to automate this from within the python script.

Posted at 12:07 on Tue, 02 Jul 2013     View/Post Comments (2)     permanent link

Tue, 25 Jun 2013

What is the worth of net worth?

The Securities and Exchange Board of India (SEBI) announced today:

Presently, mutual funds are not allowed to appoint a custodian belonging to the same group, if the sponsor of the mutual fund or its associates hold 50 per cent or more of the voting rights of the share capital of such a custodian or where 50 per cent or more of the directors of the custodian represent the interests of the sponsor or its associates.

The Board has decided that the custodian in which the sponsor of a mutual fund or its associates are holding 50 percent or more of the voting rights of the share capital of the custodian, would be allowed to act as custodian subject to fulfilling the following conditions i.e. (a) the sponsor should have net worth of atleast Rs.20,000 crore at all points of time, ...

To provide a perspective on this, the last reported net worth of Lehman was $19.283 billion which is about five times the Rs.20,000 crore stipulated in the above announcement. (The Lehman figure is from the quarterly 10-Q report filed by Lehman on July 10, 2008 about two months before it filed for bankruptcy.)

Even assuming that the reported net worth is reliable, what I fail to understand is the implicit assumption in the world of finance that wealthy people are somehow more honest than poor people. As far as I am aware, the evidence for this is zero. This widely prevalent view is simply the result of intellectual capture by the plutocracy.

Capital in finance has only one function – to absorb losses. I would have understood if SEBI had proposed that a variety of sins of the custodian would be forgiven if it (the custodian and not its sponsor) had a ring fenced net worth of Rs.20,000 crore invested in high quality assets.

Posted at 21:37 on Tue, 25 Jun 2013     View/Post Comments (0)     permanent link

Sat, 15 Jun 2013

CBOE 2013 versus BSE 1993

Earlier this week, the US SEC imposed a fine on the Chicago Board Options Exchange (CBOE) for conduct reminiscent of what used to happen in the Bombay Stock Exchange (BSE) two decades ago. In the early 1990s, the BSE board was dominated by broker members, and allegations of favouritism, conflict of interest and neglect of regulatory duties were very common. At that time, many of us believed that these were the kinds of problems that the US SEC had solved way back in the late 1930s under Chairman Douglas. India might have been six decades behind the US, but it is widely accepted that securities market reforms in the 1990s solved this problem in India, though this solution might have created a different set of problems.

The SEC order reveals problems at the CBOE which are very similar to those that the BSE used to have in the early 1990s:

In financial regulation, no problems are permanently solved – potential problems just remain dormant, ready to resurface under more favourable conditions.

Posted at 14:29 on Sat, 15 Jun 2013     View/Post Comments (0)     permanent link

Sun, 09 Jun 2013

The Baselization of CCPs

There was a time when central counterparties (CCPs) used robust and coherent risk measures. Way back in 1999, Artzner et al. could write that “We do not know of organized exchanges using value at risk as the basis of risk measurement for margin requirements” (Artzner, Delbaen, Eber and Heath (1999), “Coherent measures of risk”, Mathematical Finance, 9(3), 203-228, Remark 3.9 on page 217). During the global financial crisis, while Basel style risk management failed spectacularly, exchanges and their CCPs coped with the risks quite well. (I wrote about that here and here).

But things are changing as CCPs gear up to clear OTC derivatives. The robust risk management of CCPs is not percolating to the OTC world; instead, the model-risk infested risk measures of the OTC dealers are spreading to the CCPs. The OTC Space has a nice discussion of how the systems that CCPs use to margin OTC derivatives are different from the systems that they use for exchange traded derivatives. No, the CCPs are sticking to expected shortfall and are not jumping into value at risk. But their systems are becoming more model dependent, more dynamic (and therefore procyclical) and more sensitive to recent market conditions. These are the characteristics of Basel (even with Basel’s proposed shift to expected shortfall), and these characteristics are gradually spreading to the CCP world.

I am not convinced that this is going to end well, but then CCPs are also rapidly becoming CDO-like (see my post here) and therefore their failure in some segments might not matter anymore.

Posted at 15:48 on Sun, 09 Jun 2013     View/Post Comments (0)     permanent link

Sat, 01 Jun 2013

The NASDAQ Facebook Fiasco and Open Sourcing Exchange Software

Last week, the US SEC issued an order imposing a $10 million fine on NASDAQ for the software errors that caused a series of problems during the Facebook IPO on May 18, 2012. I think the SEC has failed in its responsibilities because this order does nothing whatsoever to solve the problems that it has identified. The order reveals the complete cognitive capture of the SEC and other securities regulators worldwide by the exchanges that they regulate.

The entire litany of errors during the Facebook IPO demonstrates that critical financial market infrastructures like exchanges and depositories should be forced to publish the source code of the systems through which their rules and bylaws are implemented. Of course, the exchanges will complain about the dilution of their “intellectual property”. But the courts have whittled down the “intellectual property” embedded in standard-essential patents and this principle applies with even greater force to software which implements rules and bylaws that are effectively subordinate legislation. Financial regulators have simply fallen behind the times in this respect.

What is the point of an elaborate process of filing and approval for rule changes, if there is no equivalent process for the actual software that implements the rule? The SEC order shows several instances where the lack of disclosure or approval processes for software changes made a complete mockery of the disclosure or approval processes for the rules and regulations themselves:

The Facebook fiasco was itself the result of an infinite loop in the software. This infinite loop would almost certainly have been detected if the source code had been publicly released and discussed with the same attention to detail that characterizes rule changes.

The lack of well-defined processes for software testing is revealed in this tidbit: “Given the heightened anticipation for the Facebook IPO, NASDAQ took steps during the week prior to the IPO to test its systems in both live trading and test environments. Among other things, NASDAQ conducted intraday test crosses in NASDAQ’s live trading environment, which allowed member firms to place dummy orders in a test security (symbol ZWZZT) during a specified quoting period. NASDAQ limited the total number of orders that could be received in the test security to 40,000 orders. On May 18, 2012, NASDAQ members entered over 496,000 orders into the Facebook IPO cross.” It should be obvious that the one thing that could have been anticipated prior to the Facebook IPO was a vastly greater volume than in small-time IPOs. Doing a test that excluded this predictable issue is laughable. Proper rules would have required the postponement of the IPO when the volume exceeded the tested capacity of the system.

It is my considered view that the SEC and other securities regulators worldwide are complicit in the fraud that exchanges perpetrate on investors in their greed to protect the alleged “intellectual property” embedded in their software. I have been writing about this for a dozen years now: (1, 2, 3, and 4). So the chances of anything changing any time soon are pretty remote.

Posted at 18:38 on Sat, 01 Jun 2013     View/Post Comments (2)     permanent link

Thu, 30 May 2013

St Petersburg once again

One and a half years ago, I blogged about a paper by Peters that purported to resolve the paradox by using time averages. I finally got around to writing this up as a working paper (also available at SSRN). The content is broadly similar to the blog post except for some more elaboration and the introduction of a time reversed St Petersburg game as a further rebuttal of the time resolution idea.

Posted at 12:37 on Thu, 30 May 2013     View/Post Comments (0)     permanent link

Tue, 28 May 2013

Currency versus stocks

In India we are accustomed to seeing the rupee and the stock market moving in the same direction as both respond to foreign investment flows. In recent weeks, the pattern has changed as a weakening rupee has coincided with a rising stock market. Another country with the same pattern is Japan, where some commentators have argued that the pattern makes sense if foreign investors are hedging the currency risk while buying stocks. This is an interesting idea with potential relevance to India – foreigners can be long the private sector (equities) and short the government (currency).

Posted at 14:28 on Tue, 28 May 2013     View/Post Comments (4)     permanent link

Mon, 20 May 2013

Macroprudential policy or financial repression

Douglas J. Elliott, Greg Feldberg, and Andreas Lehnert published a FEDS working paper last week entitled The History of Cyclical Macroprudential Policy in the United States. In gory detail, the paper describes every conceivable credit restriction that the US has imposed at some time or the other over some eight decades. It appears to me that most of them are best characterized as financial repression and not macroprudential policy. If one adopts the authors’ logic, one could go back to the middle ages and describe the usury laws as macroprudential policy.

Some two decades ago, we thought that financial repression had been more or less eliminated in the developed world, and was being gradually eliminated in the developing world as well. Post crisis, as much of the developed world deals with the sustainability of sovereign debt, financial repression is back in fashion, and macroprudential regulation provides a wonderful figleaf.

Posted at 12:07 on Mon, 20 May 2013     View/Post Comments (0)     permanent link

Thu, 16 May 2013

The CDO'ization of everything

Six years ago, when the global financial crisis began, Collateralized Debt Obligations (CDOs) were regarded as the villains that were the source of all problems. Today, the clock has turned full circle, and CDO like structures have become the solution to all problems.

The biggest innovation in the CDO was actually a contractual bankruptcy process that is lightning fast and extremely low cost (see my blog posts on the Gorton-Metrick and Squire papers that argue this in detail). The world is gradually coming around to realizing that normal bankruptcy does not work for the financial sector and that the contractual CDO alternative is far better. In 2006, I wrote that the invention of CDOs had made banks and other legacy financial institutions unnecessary. The crisis seems to be turning that speculation into reality.

Posted at 11:53 on Thu, 16 May 2013     View/Post Comments (1)     permanent link

Fri, 03 May 2013

Interest rate models and central bank corridors

In my blog post last month about interest rate models at the zero bound, I did not consider the effect of central bank corridor policies. I realized that this is an important omission when I looked at the decision of the European Central Bank (ECB) a couple of days ago to lower the main refinancing rate (the central rate in the corridor) by 0.25% and the marginal lending facility rate (the upper rate in the corridor) by 0.50%. Why was one rate lowered by twice as much as the other? The answer is that with the deposit rate (the lower rate in the corridor) stuck at zero since July 2012, the only way to keep the corridor symmetric is to set the upper rate to be exactly twice the central rate. So the marginal lending facility rate will always change by twice the change in the main refinancing rate!
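The corridor arithmetic can be sketched in a few lines (a toy illustration of the symmetry constraint, not an ECB calculation; the 0.75% starting level for the central rate is an assumed figure consistent with the cuts described above):

```python
def symmetric_corridor(central, lower=0.0):
    """With the lower (deposit) rate pinned at `lower`, symmetry
    forces upper - central == central - lower."""
    upper = 2 * central - lower
    return lower, central, upper

# Illustrative: a 0.25% cut in the central rate with the deposit rate at zero
_, _, upper_before = symmetric_corridor(0.0075)   # upper = 1.50%
_, _, upper_after = symmetric_corridor(0.0050)    # upper = 1.00%
print(round(upper_before - upper_after, 4))       # 0.005: twice the 0.25% cut
```

With the floor stuck at zero, every cut in the central rate mechanically produces a doubled cut in the upper rate, which is exactly the 0.25%/0.50% pattern in the ECB decision.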

Of course, despite the deeply (biologically) ingrained love of symmetry, central banks can decide to abandon symmetry and move the central and upper rates independently. In fact, the historical data shows that in the first three months of the ECB’s existence, the corridor was not symmetric around the central rate, but since April 1999, the symmetry has been maintained.

Modelling short term rates in a symmetric corridor floored at zero is problematic. The log normal model has problems because it does not allow rates to be zero. Yet it is the natural way to model the proportionate changes in the central and upper rates.

Posted at 13:46 on Fri, 03 May 2013     View/Post Comments (0)     permanent link

Sat, 27 Apr 2013

Seigniorage, Tobin tax, fiat money, gold and Bitcoin

It is obvious that fiat money leads to seigniorage income for the sovereign, but one would imagine that a decentralized open source money like Bitcoin (see my blog post earlier this month) would not allow anybody to earn seigniorage income. When one examines the Bitcoin design, however, one finds that it allows those with enough computing power to extract two forms of seigniorage income:

  1. In the early years of Bitcoin, computing power allows the mining of new bitcoins. This is pure seigniorage.
  2. When most of the coins have been mined, computing power can be used to charge transaction fees on every bitcoin transaction. This is also seigniorage income in the form of an all encompassing Tobin tax beyond the wildest dreams of the proponents of that tax.

Is this a design flaw or is it a necessary feature? After careful consideration, I think it is necessary. A monetary system can be sustained only if there are people with the incentive to invest in the maintenance of the system. In the case of fiat money, the sovereign expends considerable effort in preventing counterfeiting. One might think that commodity money like gold does not require such effort. But the historical evidence suggests otherwise:

  1. After the collapse of the Roman empire, “within a generation, by about A.D. 435, coin ceased to be used [in Britain] as a medium of exchange ... although many survived as jewellery, or were used for gifts or for compensation.” (Christine Desan, “Coin Reconsidered: The Political Alchemy of Commodity Money”, quoting Peter Spufford.) With nobody having enough seigniorage income to try and maintain the system, commodity money was simply re-purposed to non monetary uses, and Britain relapsed into a barter economy.
  2. Christine Desan also points out that a monetary system based on silver was reestablished centuries later by sovereigns who extracted seigniorage income by charging a 5-10% spread between the mint price and the melt value of the metal.
  3. On the other hand, Luther and White have several papers showing that after the collapse of the Somali government, the old currency continued to circulate and local warlords maintained the money supply by counterfeiting the old currency notes to earn seigniorage income.

All this suggests that any form of money (whether fiat, commodity or a decentralized open source money like Bitcoin) needs some form of seigniorage to sustain it.

Posted at 22:17 on Sat, 27 Apr 2013     View/Post Comments (0)     permanent link

Tue, 16 Apr 2013

Interest rate modelling at the zero lower bound

A long time ago, before the Libor Market Model came to dominate interest rate modelling, a lot of attention was paid to how interest rate volatility depended on the level of interest rates. If rates are moving up and down by 0.5% around a level of 3%, how much movement is to be expected when the level changes to 6%? One school of thought argued that rates would continue to fluctuate ±0.5%; this very conveniently allows the modeller to assume that rates follow the normal distribution. An opposing school argued that a fluctuation of ±0.5% around a level of 3% was actually a fluctuation of 1⁄6 of the level. Therefore when the level shifts to 6%, the fluctuation would be ±1% to preserve the same proportionality of 1⁄6 of the level. This was also convenient as modellers could assume that interest rates are log-normally distributed.

It was also possible to take a middle ground – the celebrated square root model related the fluctuations to the square root of the level. A doubling of the level from 3% to 6% would cause the fluctuation to rise by a factor of √2 from 0.5% to 0.71%. People generalized this even further by assuming that the fluctuations scaled as (level)λ where λ=0 gives the normal model, λ=1 leads to the log-normal, and λ=0.5 yields the square root model. Of course, there is no need to restrict oneself to just one of these three magic values. The natural thing for any statistician to do is to estimate λ from the data using standard maximum likelihood or other methods. Long ago, I did do such estimations for Indian interest rates.
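The scaling rule amounts to fluctuations proportional to (level)^λ; a small sketch (my own notation, calibrated to the 0.5%-at-3% example above) shows how the three magic values of λ answer the question posed earlier:

```python
# Fluctuation scaling: sigma(level) = ref_fluct * (level / ref_level) ** lam,
# calibrated so that a 3% rate level carries a 0.5% fluctuation.
def fluctuation(level, lam, ref_level=0.03, ref_fluct=0.005):
    return ref_fluct * (level / ref_level) ** lam

# What happens when the level doubles from 3% to 6%?
for lam, model in [(0.0, "normal"), (0.5, "square root"), (1.0, "log-normal")]:
    print(f"{model}: {fluctuation(0.06, lam):.4f}")
# normal: 0.0050, square root: 0.0071, log-normal: 0.0100
```

Estimating λ from data rather than fixing it a priori is then a standard maximum likelihood (or regression) exercise on observed rate changes.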

The Libor Market Model killed this cottage industry. It was most natural to assume log-normal distributions for the interest rates and then let the option implied volatility smile deal with departures from this distributional assumption. And there matters rested until the problem resurfaced when interest rates were driven down to zero after the global financial crisis. The difficulty is that zero is an inaccessible boundary point for a log-normal process: a log-normal process (geometric Brownian motion) cannot reach zero in any finite time starting from any positive rate, and if it somehow started at zero, it could never leave zero (because the volatility becomes zero).

The regulatory push to mandate central clearing for OTC derivatives has turned this esoteric modelling issue into an important policy concern because central clearing counterparties (CCPs) have to set margins for a variety of interest rate derivatives where the modelling of volatility becomes a first order issue. A variety of different approaches are being taken. The OTCSpace blog links to a couple of practitioner oriented discussions on this subject (here and here). Among the solutions being proposed are the following:

Shift to a normal model
This would eliminate under margining at zero interest rates, but potentially create severe under margining at high rates.
Combine normal and log-normal fluctuations
The idea is that there are two sources of fluctuations in interest rates – one behaves in a “normal” and the other in a “log-normal” manner. This may be intractable for valuation purposes, but might be acceptable for risk modelling since it solves the under margining problem at both ends of the interest rate spectrum.
Interest rate plus a small constant is log normal
For example, assume that the fluctuations in interest are proportional to the level of rates +1%.
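The third approach is often called a shifted (or displaced) log-normal model. A minimal sketch, with the 1% shift from the example above and a purely illustrative volatility: the shifted process x = r + 1% follows a geometric Brownian motion and stays positive, so the rate can sit at zero or go slightly negative but never breach −1%.

```python
import random
from math import exp, sqrt

def shifted_lognormal_path(r0=0.0, shift=0.01, sigma=0.5,
                           dt=1 / 252, n=252, seed=42):
    """Simulate r where x = r + shift follows a driftless GBM.
    Exact log-normal steps keep x > 0, hence r > -shift always."""
    rng = random.Random(seed)
    x = r0 + shift
    path = [r0]
    for _ in range(n):
        x *= exp(-0.5 * sigma ** 2 * dt + sigma * sqrt(dt) * rng.gauss(0, 1))
        path.append(x - shift)
    return path

path = shifted_lognormal_path()
# Even starting from a zero rate, the fluctuations are non-degenerate,
# and the rate is floored at -1% by construction.
print(min(path) > -0.01)
```

This is why the approach avoids the under-margining problem at zero: volatility does not vanish when the rate itself hits zero.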

As an aside, I believe that the zero lower bound is actually a bound not on the interest rate, but on the contango on money. In other words, the zero lower bound is simply the proposition that money (being the unit of account itself) can neither be in contango nor in backwardation. The standard cost of carry model for futures pricing tells us that the contango on money is equal to the risk free interest rate PLUS the storage cost of money MINUS the convenience yield. It is this contango that is constrained to be zero.

If the convenience yield of money is larger than the storage costs (as it usually is in normal times), the contango is zero when the interest rate is positive. In an era of unlimited monetary easing, the convenience yield of money can become very small and the zero contango implies a slightly negative interest rate since the storage cost is not zero. For physical currency, the storage cost is high because of the need to guard against theft. For insured bank deposits, the bank needs to recoup deposit insurance in some form through various fees. Of course, uninsured bank deposits are not money – they are simply a form of haircut prone debt (think Cyprus). Actually, Cyprus makes one sceptical about whether even insured bank deposits are money.
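Setting the contango of money to zero in the cost-of-carry identity above gives r = convenience yield − storage cost; with purely hypothetical numbers for the yield and the cost:

```python
def money_rate(convenience_yield, storage_cost):
    # Zero contango on money: r + storage - convenience = 0
    return convenience_yield - storage_cost

print(round(money_rate(0.030, 0.005), 4))  # normal times: positive rate
print(round(money_rate(0.001, 0.005), 4))  # easing era: slightly negative rate
```

The second case is the zero lower bound turning slightly negative: when the convenience yield of money collapses, the non-zero storage cost pushes the implied rate below zero.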

Posted at 16:48 on Tue, 16 Apr 2013     View/Post Comments (2)     permanent link

Wed, 10 Apr 2013

Bitcoin, negative interest rates and the future of money

I believe that everybody who is interested in money should study the digital currency Bitcoin very carefully because monetary innovations can have long lasting consequences even when they fail miserably:

  1. China’s experiments with paper money ended in inflationary disaster (as almost all fiat monies appear to do), but it succeeded in replacing China’s long standing bronze coin standard with a silver unit of account (see for example, Richard von Glahn (2010), “Monies of Account and Monetary Transition in China, Twelfth to Fourteenth Centuries”, Journal of the Economic and Social History of the Orient, 53(3), 463-505).
  2. Johan Palmstruch who brought paper money to Europe and founded the world’s first central bank (the Sveriges Riksbank of Sweden) was sentenced to death. Though he was reprieved, he still lost everything and ended up in jail.

There is little doubt in my mind that digital currencies represent a vast technical and conceptual advance over the currencies in existence today. This would remain true even if Bitcoin implodes in a collapsing bubble or is destroyed by technical flaws in its design or implementation.

Nemo has an excellent ten-part series providing a gentle introduction to all the mathematics that one needs to understand how Bitcoin works. This is a good starting point for somebody wanting to go on to Satoshi Nakamoto’s seminal paper introducing the idea of Bitcoin.

From a finance point of view, what is most interesting about Bitcoin is that it is perhaps the first currency to be designed with a strong deflationary bias. There is an upper limit on the number of bitcoins that can ever be created, and even lost bitcoins cannot be replaced (unlike worn out notes, which central banks replace with newly printed ones). In paper currencies, if I lose a currency note, somebody else probably finds it, and so the note remains in circulation. By contrast, Bitcoin is so designed that if the owner loses a bitcoin, the “finder” cannot use it, and so the lost bitcoin ceases to exist for all practical purposes. (If you are puzzled by the apparently inconsistent capitalization of bitcoin/Bitcoin in this paragraph, you may want to read this).

While most fiat currencies end up printing notes in higher and higher denominations to combat inflation, Bitcoin is designed to combat deflation by using smaller and smaller denominations like milli bitcoins and micro bitcoins, all the way down to the smallest unit, the satoshi, which is equivalent to 10 nano bitcoins. As a result, the zero interest rate lower bound could be an even more serious problem for Bitcoin than for existing currencies.

One theoretical possibility is that the deflation overshoots significantly so that the currency can experience a mild inflation from that point onward somewhat on the lines of the Dornbusch overshooting model. But for that to work on a sustained basis, there would need to be periodic bouts of intense episodic deflation. The sharp appreciation of the bitcoin in the last few weeks in response to the Cyprus crisis suggests one way in which this could happen, but that would be a nightmare to model.

Posted at 17:44 on Wed, 10 Apr 2013     View/Post Comments (5)     permanent link

Tue, 09 Apr 2013

Option pricing with bimodal distributions

Jack Schwager’s book Hedge Fund Market Wizards has a chapter on James Mai of Cornwall Capital in which Mai talks about seeking opportunities in mispriced options. Many of us know about Mai from Michael Lewis’ Big Short which described how Mai made money by betting against subprime securities. But in the Schwager book, Mai talks mainly about options. Specifically, at page 232, Mai discusses opportunities “where the market assigned normal probability distributions to situations that clearly had bimodal outcomes”.

At first reading, I thought that Mai was simply talking about fat tails and the true volatility being higher than the option implied volatility. But on closer reading, this does not appear to be the case. At one point in the interview, Mai talks about the market under estimating the volatility of the distribution, while at another, he describes the market making mistakes in the mean of the option implied distribution. So it does appear that Mai is distinguishing between errors in the mean, the volatility and the shape of the distribution.

This set me thinking about whether the bimodality of the distribution would make a big difference if the market assumes a (log) normal distribution with the correct mean and variance. Bimodality is very different from fat tails. In fact, if the distribution around each of the two modes is tight, then the tails are actually very thin. The departure from normality is actually a hollowing out of the middle of the distribution. For example, one may believe that a stock would either go to near zero (bankruptcy) or would double (if the risk of bankruptcy is eliminated) – the probability that the stock would remain close to the current level may be thought to be quite small. Mai himself discusses such an example.

To understand the phenomenon, let us take an extreme case of bimodality where there are actually only two outcomes. For simplicity, I assume that the risk free rate is zero. To facilitate comparison with the log normal distribution, I assume that the distribution of log asset prices is symmetric. If the current asset price is equal to 1, then by log symmetry, the two outcomes must be H and 1⁄H. Since the two possible outcomes of the log price are ±ln H, the volatility is ln H assuming that the option maturity is 1. The risk neutral probabilities of the two outcomes (p and 1 − p) are easy to compute. Since the risk free rate is zero, pH + (1 − p)⁄H = 1, implying that p = 1 ⁄ (1 + H) and 1 − p = H ⁄ (1 + H). (Unless H is quite large, these probabilities are not very far from 1⁄2.)

With all these computations in place, it is straightforward to compare the true bimodal option price with that obtained by the Black Scholes formula using the correct volatility. The plot below is for H = 1.2.

Plot of bimodal option price versus Black Scholes Price

It might appear that the impact of the bimodal distribution is quite small. However, the important question is what is the expected return from buying an option at the wrong (Black Scholes) price in the market and holding it to maturity. The plot below shows that the best strategy is to buy an option with a strike about 6% out of the money. This earns a return of almost 31% (there is a 45% chance of earning a return of 188% and a 55% chance of losing 100%).

Plot of expected returns from buying call at Black Scholes Price
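These numbers can be reproduced with a short script (my own sketch of the computation described above, using the standard Black Scholes call formula with zero rate and unit maturity):

```python
from math import log, sqrt, erf

def ncdf(x):                                   # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(k, sigma, s=1.0):                  # r = 0, T = 1
    d1 = (log(s / k) + 0.5 * sigma ** 2) / sigma
    return s * ncdf(d1) - k * ncdf(d1 - sigma)

H = 1.2
sigma = log(H)            # volatility implied by the outcomes H and 1/H
p = 1 / (1 + H)           # risk-neutral probability of the up move to H

def bimodal_call(k):      # true expected payoff under the two-point model
    return p * max(H - k, 0) + (1 - p) * max(1 / H - k, 0)

k = 1.06                  # roughly 6% out of the money
cost = bs_call(k, sigma)  # what the market charges under Black Scholes
ret = bimodal_call(k) / cost - 1
print(f"expected return: {ret:.0%}")   # expected return: 31%
```

The 45%/55% split quoted above is just p versus 1 − p: with probability p ≈ 0.45 the option pays H − k ≈ 2.9 times its cost, and with probability 1 − p ≈ 0.55 it expires worthless.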

The bimodal example tells us that even with thin tails and no under estimation of volatility (no Black Swan events), there can be significant opportunities in the option market arising purely from the shape of the distribution. How would one detect whether the market is already implying a bimodal outcome? This is easily done by looking at the volatility smile. If the market is using a bimodal distribution, the volatility smile would be an inverted U shape which is very different from that normally observed in most asset markets.

Plot of the implied volatility smile under the bimodal distribution
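To check this claim, one can back out Black Scholes implied volatilities from the bimodal prices by bisection (again with H = 1.2; the strikes are chosen purely for illustration):

```python
from math import log, sqrt, erf

def ncdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(k, sigma, s=1.0):                  # r = 0, T = 1
    d1 = (log(s / k) + 0.5 * sigma ** 2) / sigma
    return s * ncdf(d1) - k * ncdf(d1 - sigma)

H = 1.2
p = 1 / (1 + H)

def bimodal_call(k):
    return p * max(H - k, 0) + (1 - p) * max(1 / H - k, 0)

def implied_vol(k, lo=1e-9, hi=2.0):
    """Bisection works because the BS call price is increasing in sigma."""
    target = bimodal_call(k)
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if bs_call(k, mid) < target else (lo, mid)
    return (lo + hi) / 2

smile = {k: implied_vol(k) for k in (0.85, 1.00, 1.15)}
print(smile)   # implied vol peaks at the money and falls off in both wings
```

The hollowed-out middle of the distribution shows up as an at-the-money implied volatility above ln H with lower implied volatilities in both wings, the inverted-U shape described above.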

Posted at 22:19 on Tue, 09 Apr 2013     View/Post Comments (4)     permanent link

Sat, 30 Mar 2013

Financial Sector Legislative Reforms Commission

The Financial Sector Legislative Reforms Commission submitted its report a few days ago. The Commission also submitted a draft law to replace many of the financial sector laws in India. Since I was a member of this Commission, I have no comments to add.

Posted at 15:06 on Sat, 30 Mar 2013     View/Post Comments (0)     permanent link

Mon, 25 Mar 2013

Big data crushes the regulators

I have repeatedly blogged (for example, here and here) about the urgent need for financial regulators to get their act together to deal with the big data generated by the financial markets that these regulators are supposed to regulate. The reality however is that the regulators are steadily falling behind.

Last week, Commissioner Scott D. O’Malia of the US Commodity Futures Trading Commission delivered a speech in which he admitted that “Big Data Is the Commission’s Biggest Problem”. This is what he had to say (emphasis added):

This brings me to the biggest issue with regard to data: the Commission’s ability to receive and use it. One of the foundational policy reforms of Dodd-Frank is the mandatory reporting of all OTC trades to a Swap Data Repository (SDR). The goal of data reporting is to provide the Commission with the ability to look into the market and identify large swap positions that could have a destabilizing effect on our markets. Since the beginning of 2013, certain market participants have been required to report their interest rate and credit index swap trades to an SDR.

Unfortunately, I must report that the Commission’s progress in understanding and utilizing the data in its current form and with its current technology is not going well. Specifically, the data submitted to SDRs and, in turn, to the Commission is not usable in its current form. The problem is so bad that staff have indicated that they currently cannot find the London Whale in the current data files. Why is that? In a rush to promulgate the reporting rules, the Commission failed to specify the data format reporting parties must use when sending their swaps to SDRs. In other words, the Commission told the industry what information to report, but didn’t specify which language to use. This has become a serious problem. As it turned out, each reporting party has its own internal nomenclature that is used to compile its swap data.

The end result is that even when market participants submit the correct data to SDRs, the language received from each reporting party is different. In addition, data is being recorded inconsistently from one dealer to another. It means that for each category of swap identified by the 70+ reporting swap dealers, those swaps will be reported in 70+ different data formats because each swap dealer has its own proprietary data format it uses in its internal systems. Now multiply that number by the number of different fields the rules require market participants to report.

To make matters worse, that’s just the swap dealers; the same thing is going to happen when the Commission has major swap participants and end-users reporting. The permutations of data language are staggering. Doesn’t that sound like a reporting nightmare? Aside from the need to receive more uniform data, the Commission must significantly improve its own IT capability. The Commission now receives data on thousands of swaps each day. So far, however, none of our computer programs load this data without crashing. This would seem odd with such a seemingly small number of trades. The problem is that for each swap, the reporting rules require over one thousand data fields of information. This would be bad enough if we actually needed all of this data. We don’t. Many of the data fields we currently receive are not even populated.

Solving our data dilemma must be our priority and we must focus our attention to both better protect the data we have collected and develop a strategy to understand it. Until such time, nobody should be under the illusion that promulgation of the reporting rules will enhance the Commission’s surveillance capabilities. As Chairman of the Technology Advisory Committee, I am more than willing to leverage the expertise of this group to assist in any way I can.

The regulators have only themselves to blame for this predicament. As I pointed out in a blog post nearly two years ago, the SEC and the CFTC openly flouted the express provision in the Dodd-Frank Act to move towards algorithmic descriptions of derivatives. I would simply repeat what I wrote then:

Clearly, the financial services industry does not like this kind of transparency and the regulators are so completely captured by the industry that they will openly flout the law to protect the regulatees.

Posted at 19:51 on Mon, 25 Mar 2013     View/Post Comments (0)     permanent link

Sun, 17 Mar 2013

JPMorgan London Whale and Macro Hedges

Last week, the US Senate Permanent Subcommittee on Investigations released a staff report on the London Whale trades in which JPMorgan Chase lost $6.2 billion last year. The 300 page report puts together a lot of data that was missing in the JPMorgan internal task force report which was published in January.

Unsurprisingly, the Senate staff report takes a very critical view of the JPMorgan trades which the bank’s chairman described in a conference call last May as a “bad strategy ... badly executed ... poorly monitored.” Where I think the staff report goes overboard is in criticising even the original, relatively simple hedging strategy that JPMorgan adopted during the global financial crisis (well before the complete corruption of the strategy in late 2011 and early 2012).

The staff report says:

A number of bank representatives told the Subcommittee that the SCP was intended to provide, not a dedicated hedge, but a macro-level hedge to offset the CIO’s $350 billion investment portfolio against credit risks during a stress event. In a letter to the OCC and other agencies, JPMorgan Chase even contended that taking away the bank’s ability to establish that type of hedge would undermine the bank’s ability to ride out a financial crisis as it did in 2009. The bank also contended that regulators should not require a macro or portfolio hedge to have even a “reasonable correlation” with the risks associated with the portfolio of assets being hedged. The counter to this argument is that the investment being described would not function as a hedge at all, since all hedges, by their nature, must offset a specified risk associated with a specified position. Without that type of specificity and a reasonable correlation between the hedge and the position being offset, the hedge could not be sized or tested for effectiveness. Rather than act as a hedge, it would simply function as an investment designed to take advantage of a negative credit environment. That the OCC was unable to identify any other bank engaging in this type of general, unanchored “hedge” suggests that this approach is neither commonplace nor useful

I think everything about this paragraph is wrong and indeed perverse.

  1. What the crisis taught us is that tail risks are more important than any other risks, and far from criticising tail hedges, policy makers should be doing everything possible to encourage them. That the US regulators could not find any other bank that implemented such tail hedges speaks volumes about the complacency of most bank managements. It is those banks that deserve to be criticised.
  2. We do not need correlations to size or test the effectiveness of macro hedges. Consider for example hedging a diversified equity portfolio with deep out of the money puts. For a complete tail hedge, the notional value of the put would be equal to the value of the portfolio itself. A beta equal to one might be a perfectly reasonable assumption for a diversified portfolio since a precise estimate of the tail beta might not be very easy. There is no need to compute a correlation between the put value and the portfolio value to determine the effectiveness of the hedge. Even the correlation between the index and the equity portfolio is not too critical because in a crisis, correlations can be expected to go to one.
  3. A put option of the kind described above is not an investment designed to take advantage of a stock market crash. Viewed as an investment, the most likely return on a deep out of the money put option is -100% (the put option expires worthless), just as the most likely return on a fire insurance policy is -100% because there are no fires and no insurance claims.

I think the problem with the JPMorgan hedges as they metamorphosed during 2011 was something totally different. The key is a statement that the JPMorgan Chairman made in the May 2012 conference call after the losses became clear:

It was there to deliver a positive result in a quite stressed environment and we feel we can do that and make some net income.

Sorry, tail hedges do not produce income, they cost money. Any alleged tail hedge that is expected to earn income under normal conditions is neither a hedge nor a speculative investment – it is just a disaster waiting to happen.

Posted at 17:25 on Sun, 17 Mar 2013     View/Post Comments (2)     permanent link

Wed, 06 Mar 2013

Is India experiencing incipient capital flight?

A number of phenomena we observe in India in the last few years can be interpreted as incipient capital flight:

  1. Gold imports have risen sharply not only in value terms but also in terms of quantity. The nature of the gold demand has also changed. In recent years, we have been seeing a significant amount of gold being bought by the rich as an investment. A poor household buying gold jewellery could be interpreted as a form of social security, but a rich household buying gold bars and biscuits is a form of capital flight. Instead of converting INR into USD or CHF, many rich investors are converting INR into XAU.
  2. Many Indian business groups are investing more outside India than in India. Many of them are openly justifying it on the ground that the investment climate in India is poor. This is of course a form of capital flight.
  3. It is difficult to explain India’s large current account deficit and poor export growth solely on the basis of low growth in the developed world. First, many of our competitors in Asia and elsewhere are posting large trade surpluses in the same environment. Second, the depreciation of the Indian rupee has improved competitiveness of Indian companies in world markets. Indian companies used to complain loudly about their lack of competitiveness when the dollar was worth only 40 rupees, but with the dollar fetching 55 rupees, these complaints have disappeared. I fear that some of the current account deficit that we see today is actually disguised capital flight via under-invoicing of exports and over-invoicing of imports.

While economists have focused on the impossible trinity (open capital account, independent monetary policy and fixed exchange rates), I am more concerned about the unholy trinity that leads to full blown capital flight. This unholy trinity has three elements: (a) a de facto open capital account, (b) poor perceived economic fundamentals and (c) heightened political uncertainty. I believe that the first two elements of this unholy trinity are already in place; we can only hope that the 2014 elections do not deliver the third element.

While our policy makers keep up the pretence that India has a closed capital account, the reality is that during the last decade, the capital account has in fact become largely open. Outward capital flows were largely opened up by liberalizing outward FDI and allowing every person to remit $200,000 every year for investment outside India. This means that the first element of the unholy trinity (an open capital account) has been in place for some time now. If the other two elements were also to materialize, a full blown capital flight is perfectly conceivable. India’s reserves may appear comfortable in terms of number of months of imports, but in an open capital account, this is not a relevant metric. What is relevant is that India’s reserves are about 20% of the money supply (M3), and in a full blown capital flight, a large part of M3 is at risk of fleeing the country.

Posted at 14:04 on Wed, 06 Mar 2013     View/Post Comments (2)     permanent link

Mon, 04 Mar 2013

More on 2014 as 1994 redux

Last week, I wrote a blog post on how 2014 may witness the same withdrawal of capital flows from emerging markets as was seen when the US Fed tightened interest rates in 1994. Over the weekend (India time), the Fed published a speech by Chairman Ben Bernanke which spells out the issues with surprising bluntness. The key points in this speech, as I see them, are:

Bernanke is clearly warning US financial institutions to prepare for the coming bond market sell-off. It is not Bernanke’s job to warn emerging markets, but to those emerging market policy makers who read the speech, the message is loud and clear – it is time for serious preparation.

Posted at 19:49 on Mon, 04 Mar 2013     View/Post Comments (0)     permanent link

Wed, 27 Feb 2013

Looking at 2014 through the prism of 1994

Unless the United States shoots itself in the foot during the fiscal negotiations, it could conceivably be on the cusp of a recovery. There is a serious possibility that the unemployment rate starts falling towards 7%, and the US Fed begins to consider unwinding some of its unconventional monetary easing measures. Unconventional monetary policy is equivalent to a highly negative policy rate, and so a substantial monetary tightening can happen well before the Fed starts raising the Fed Funds rate.

The situation is reminiscent of 1994 when the US Fed tightened monetary policy as the economy recovered from the recession of the early 1990s. This monetary tightening is best known for the upheaval that it caused in the US bond markets, but the turbulence in US Treasuries lasted only a few months. The more lasting impact was on emerging markets as higher US yields dampened capital flows to emerging economies.

History never repeats itself (though as Mark Twain remarked, it sometimes rhymes). Yet, there is reason to fear that a normalization of interest rates in the US in the coming year could be destabilizing to many emerging markets which are today bathed in the tide of liquidity unleashed by the US Fed and other global central banks. India, in particular, has become overly addicted to foreign capital flows to cover its large current account deficit, and any retrenchment of these flows in response to better opportunities in the US could be quite painful.

Posted at 16:31 on Wed, 27 Feb 2013     View/Post Comments (3)     permanent link

Fri, 22 Feb 2013

Indian Gold ETFs become Gold ETNs

Last week, the Securities and Exchange Board of India allowed Indian gold Exchange Traded Funds (ETFs) to deposit their gold with a bank under a Gold Deposit Scheme instead of holding the gold in physical form. In the Gold Deposit Scheme, the bank does not act as a custodian of the gold. Instead, the bank lends the gold out to jewellers (and others) and promises to repay the gold on maturity.

In my view, use of the Gold Deposit Scheme will convert the Gold ETF into an ETN (Exchange Traded Note) or an ETP (Exchange Traded Product). The ETF does not hold gold – it only holds an unsecured claim against a bank and is thus exposed to the credit risk of the bank. If the bank were to fail, the ETF would stand in the queue as an unsecured creditor of the bank. The ETF therefore does not hold gold; it holds a gold linked note.

So far, the ETFs in India have been honest-to-God ETFs instead of the synthetic ETNs and ETPs that have unfortunately become so popular in Europe and elsewhere. With the new scheme, India has also joined the bandwagon of synthetic ETNs and ETPs masquerading as ETFs.

Truth in labelling demands that any ETF that uses the Gold Deposit Scheme should immediately be rechristened as an ETN. I also think that this is a change in a fundamental attribute of the ETF and should require unit holder approval.

From a systemic risk perspective, I fail to see why this concoction makes sense at all. It unnecessarily increases the inter-connectedness of the banking and mutual fund industries and aggravates systemic risk. A run on the bank could induce a run on the ETF and vice versa. All this is in addition to the maturity mismatch issues described by Kunal Pawaskar.

I can understand the desire to put idle gold to work, but that does not require the intermediation of the bank at all. The ETF can lend the gold directly against cash collateral with daily mark to market margins. Even if it were desired to use the services of a bank, there are better ways to do this than to treat the ETF just like any other retail depositor. For example, the bank could provide cash collateral to the ETF with daily mark to market margins. As is standard in such contracts, a portion of the interest that the ETF earns on the cash collateral would be rebated back to the bank to cover its hedging and custody costs.
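A back-of-the-envelope sketch of how such a cash-collateralized gold loan could be marked to market daily (the 2% haircut, the lot size and the prices below are made up purely for illustration):

```python
def daily_margin_calls(ounces, prices, haircut=0.02):
    """Variation margin on a gold loan against cash collateral.
    A positive call means the borrower posts more cash as gold rises;
    a negative call means cash is returned as gold falls."""
    calls = []
    posted = ounces * prices[0] * (1 + haircut)
    for p in prices[1:]:
        required = ounces * p * (1 + haircut)
        calls.append(round(required - posted, 2))
        posted = required
    return calls

# 100 oz lent; gold at 1600, then 1620, then 1590 (hypothetical numbers)
calls = daily_margin_calls(100, [1600, 1620, 1590])
```

With daily calls of this kind, the lender's exposure to the borrower is never more than one day's price move plus the haircut, which is why the intermediation of a bank deposit is unnecessary.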

Posted at 13:29 on Fri, 22 Feb 2013     View/Post Comments (1)     permanent link

Fri, 08 Feb 2013

Disincentivising Cheques

More than a year ago, I blogged about how banks in India were perversely incentivising retail customers to use cheques instead of electronic transfers even though the cost to the whole system of processing a cheque is much higher. I also hypothesized that it may well be rational for an individual bank to follow this perverse pricing under certain assumptions about price elasticity of demand.

Now the Reserve Bank of India has put out a discussion paper on Disincentivising Issuance and Usage of Cheques. It discusses at length ways to disincentivize individuals, institutions and government departments from using cheques. I was surprised to find, however, that there was no proposal to disincentivize the banks themselves. I think it makes a lot more sense to impose a significant charge on the paying bank for every cheque that is presented for clearing. It can be left to the banks to decide whether (and how) to pass on the charge to some or all of their customers. The more important purpose of the charge would be to incentivize the banks to educate and incentivize their customers and also to make their payment gateways more user friendly. Why should the charge be on the paying banks? Because they own the customer who writes the cheque and also because they sit on the float when cheques are used.

Posted at 20:29 on Fri, 08 Feb 2013     View/Post Comments (2)     permanent link

Sun, 03 Feb 2013

Financial Risk: Perspectives from psychology and biology

During the last week, I found myself reading two different perspectives on financial risk:

  1. A fascinating paper by Anat Bracha and Elke Weber entitled “A Psychological Perspective of Financial Panic” (h/t Mostly Economics).
  2. A marvellous book by John Coates called The Hour Between Dog and Wolf: Risk Taking, Gut Feelings and the Biology of Boom and Bust.

The main thesis of Bracha and Weber is that:

... perceived control is a key concept in understanding mania and panic, as the need for control is a basic human need that contributes to optimism bias and affects risk perception more generally. Lack of control is therefore a violation of a basic need and will trigger episodes of panic and retreat to the safe and known.

The illusion of control refers to the human tendency to believe we can control or at least influence outcomes, even when these outcomes are the results of chance events.

The book by Coates is much more complex. The title itself requires a whole paragraph of explanation – it translates a French phrase that refers to the time around dusk when it is difficult to determine whether a shadow that one is seeing is that of a dog or a wolf, implying that the one could metamorphose into the other at any time.

From a biological perspective, it appears that:

... researchers have found that three types of situations signal threat and elicit a massive physiological stress response – those characterized by novelty, uncertainty and uncontrollability

Novelty, uncertainty and uncontrollability – the three conditions are similar in that when subjected to them we have no downtime, but are in a constant state of preparedness.

The uncontrollability that the psychologists emphasize is present in the biologist's description as well, but it does not seem to have a privileged position compared to other forms of risk – novelty and uncertainty. The biological response to all these forms of risk is the same – the body is flooded with stress hormones (mainly cortisol) which command the body to “shut down long term functions of the body and marshal all available resources, mainly glucose, for immediate use.”

More interesting is that the biological (unconscious) stress response closely mirrors objective reality, unlike the self reported (conscious) risk perception elicited by questionnaires. In his research with a group of bond market traders, Coates asked the traders to report their level of stress at the end of each day. This self reported stress was unrelated to their losing money, to the swings in their P&L, and to the volatility in the market. At the same time, their cortisol levels faithfully measured the volatility that the individual traders were experiencing. That is not all – the average cortisol level of this group of traders very closely tracked the implied volatility of options related to the bonds that they were trading.

Coates links this finding to what biologists had found with rats. After several days of being placed in an objectively dangerous situation, the rats became habituated to it and outwardly calm. However, their stress hormones continued to reflect the danger that still existed. Again, the unconscious biology reflected the objective reality while the conscious behaviour did not.

This seems to suggest that the “illusion of control” that Bracha and Weber talk about may be an illusion that afflicts only the conscious mind and not the unconscious mind that governs actual risk taking. Biology teaches us to assume that millions of years of evolution have perfected the more primitive (unconscious) parts of the brain to achieve near optimal behaviour (at least relative to the original environment). The more recent (conscious) parts of the brain perhaps have still some way to go before reaching evolutionary perfection.

Posted at 10:58 on Sun, 03 Feb 2013     View/Post Comments (1)     permanent link

Fri, 01 Feb 2013

Sociology of the evolution of electronic trading

Donald MacKenzie has recently written a couple of papers analyzing the evolution of electronic trading from a sociology of finance point of view. The first paper describes the emergence of ECNs (electronic communication networks) in the United States, beginning with the Island system, which merged into Instinet and was ultimately acquired by Nasdaq. The second paper describes the rise of electronic trading at the Chicago Mercantile Exchange.

I have in the past blogged about MacKenzie's previous works (here, and here) and find his approach useful. Others have been less impressed – one critic dismissed some of MacKenzie's previous works as “remarkable close-up studies ... without context ... all cogs and no car”. MacKenzie gets back at this criticism brilliantly at the end of the Island paper:

... historical change can involve shift in scale. In this paper, we have focussed on a small actor becoming big ... However, we could equally have told a story of big actors becoming small ... NYSE was a car, and has become a cog. Island was a cog that became a car ... Scales are indeed not stable, and cogs – and their histories – matter.

I knew most of the facts about Island from Scott Patterson's fascinating book on Dark Pools (subtitled “High-Speed Traders, A.I. Bandits, and the Threat to the Global Financial System”). Still, I learned a lot from MacKenzie's paper – the theoretical framework (particularly the idea of bricolage in the process of financial innovation) is quite valuable. I learned less from the paper on the CME, even though, in this case, many of the facts were new to me.

Posted at 16:25 on Fri, 01 Feb 2013     View/Post Comments (0)     permanent link

Fri, 25 Jan 2013

Pamper the consumers or the computer programmers?

In case you thought that the answer to this question is obvious, you should read the report of the Reserve Bank of India’s Technical Committee to Examine Uniform Routing Code and A/c Number Structure. While recommending 26-digit bank account numbers (IBAN) in India, the Committee has this to say:

6.5.3 The main disadvantage (if we really have to pamper to customers as the information can be easily displayed/stored on debit cards and cell phones, besides the traditional paper diary/chit of paper) of this IBAN option is that though it entails least effort from banks and facilitates faster IBAN implementation, it provides a more complex payment system interface to customers due to long IBAN string. In other words, while efforts at banks’ end will be minimized, the customers will still have to remember and provide the long IBAN, including check digits, for their payment system activities. (emphasis added)

In other words, the convenience of the banks’ computers and their programmers trumps the convenience of hundreds of millions of consumers.

Another troubling passage in the report is the following discussion about why the branch code cannot be omitted in the bank code (IFSC) that is used for electronic fund transfers:

Upon enquiring with banks, it is learnt that many banks have not built any check-digit in their account numbers. Thus, any inward remittance which comes to a bank will be processed even if there is any mistake in account number, as long as that account number exists in the beneficiary bank. In the absence of check digit in account numbers, many banks depend on the branch identifier to avoid credit being afforded to wrong accounts. This is a significant irreversible risk where wrong beneficiary would get the credit and customer would have no recourse – legal or moral

The idea that a branch identifier is a substitute for a check digit is a serious mistake. Any reasonable check digit should catch all single digit errors and most (if not all) transposition errors (where two neighbouring digits are interchanged). These are the most common errors in writing or typing a long number (the other common error of omitting a digit is easily caught even without a check digit because the number of digits in an account number is fixed for each bank). The use of the branch identifier on the other hand is not guaranteed to catch the most commonly occurring errors – many single digit errors would lead to a valid account number at the same branch. With the increasing use of electronic fund transfers (which ignore the name of the account holder and rely only on the account number), I would have thought that it would make sense to insist that all account numbers should have a check digit instead of insisting that the IFSC code should include a branch code. But that would place a greater burden on some overworked computer programmers in some banks – and regulators apparently think that systems people (unlike consumers) must be pampered at all costs.
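One standard construction with exactly these guarantees is the ISO 7064 MOD 97-10 scheme used for IBAN check digits: because 97 is prime and exceeds the value change caused by any single-digit error (at most 9 times a power of 10) or any adjacent transposition (at most 81 times a power of 10), every such error changes the residue and is caught. A minimal sketch (the 12-digit account number is hypothetical):

```python
def append_check_digits(acct: str) -> str:
    """Append two check digits (ISO 7064 MOD 97-10, the IBAN scheme) so
    that the full number is congruent to 1 modulo 97."""
    check = 98 - (int(acct) * 100) % 97
    return acct + f"{check:02d}"

def is_valid(full: str) -> bool:
    # a correctly check-digited number leaves remainder 1 modulo 97
    return int(full) % 97 == 1

# hypothetical account number, extended with its two check digits
full = append_check_digits("123456789012")
```

Mandating something of this sort in account numbers would catch miskeyed digits at data entry, before an electronic transfer is even initiated, which a branch identifier can never do.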

The problem is not confined to banking. In the financial markets also, the convenience of the programmers often dictates the nature of market regulation, and the systems people are able to hold the regulator to ransom by simply asserting that software changes are too difficult. On the other hand, whenever I go to websites like stackoverflow in search of answers to some computing problem, I am constantly amazed that there are so many people able and willing to find solutions to the most difficult problems. In an ideal world, I think regulators would require every systemically important financial organization to have senior systems people with a reputation of say 10,000 at stackoverflow or some such metric of competence and a “can do” attitude.

While we have “fit and proper” requirements for the top management of banks and financial organizations, Basel and IOSCO do not impose any “fit and proper” requirement on the systems people. I think this needs to change because so much of risk comes from poorly designed and poorly maintained software.

Posted at 16:33 on Fri, 25 Jan 2013     View/Post Comments (6)     permanent link

Mon, 21 Jan 2013

Single factor asset pricing model with leverage shocks

I have been reading an interesting paper by Tobias Adrian, Erkko Etula and Tyler Muir proposing a single factor asset pricing model that is based on shocks to securities broker-dealer leverage. The performance of this single factor model in pricing the Fama-French and momentum portfolios seems to be as good as that of the four factor model that includes the three Fama-French factors (market, size and value) and the momentum factor. In addition, the leverage factor model prices risk free bond portfolios as well as the four factor model augmented with a factor for interest rate level.

The results seem too good to be true and Bayesian theory teaches us that surprising results are likely to be false even if they are published in a top notch peer reviewed journal (see for example here or here). (I do recall the incident a couple of years ago when the Chen-Zhang q-factor papers became “defunct” after a timing error was identified in the initial work.) Having said that, the Adrian-Etula-Muir paper has been around since 2008 and was last revised in March 2012. Maybe, it has survived long enough to be taken seriously.

Another possible criticism is that the Adrian-Etula-Muir paper does all the empirical analysis using the Fama-French style size-value-momentum portfolios and not on the individual stocks themselves. Falkenblog goes so far as to say “What I suspect, though I haven’t done the experiment, is that if you regress individual stocks against this factor there will be a zero correlation with returns.” My own intuition is that the effect would not weaken so dramatically in going from portfolios to individual stocks. In any case, asset pricing tests have to be based on portfolios to obtain statistical power – the correct question to ask is whether the correlation with a random well diversified portfolio is likely to be high.
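For readers who want to see what such an asset pricing test involves, here is a sketch of the standard two-pass procedure on simulated data. Everything here is synthetic: the "factor" is random noise standing in for leverage shocks, not the authors' actual broker-dealer series, and the portfolio returns are generated from known betas so that the recovered price of risk can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 25                        # time periods, test portfolios
factor = rng.normal(size=T)
factor -= factor.mean()               # shocks should be innovations: demean
true_betas = np.linspace(0.5, 1.5, N)
lam = 0.4                             # true price of factor risk
excess = (true_betas * lam            # expected return rises with beta
          + np.outer(factor, true_betas)
          + rng.normal(scale=0.5, size=(T, N)))

# Pass 1: time-series regression gives each portfolio's factor beta
betas = np.array([np.polyfit(factor, excess[:, i], 1)[0] for i in range(N)])
# Pass 2: cross-sectional regression of mean excess returns on betas
lam_hat = np.polyfit(betas, excess.mean(axis=0), 1)[0]
```

The cross-sectional slope in the second pass estimates the price of risk; the point about statistical power is that with diversified portfolios the idiosyncratic noise in pass 1 is small, whereas with individual stocks the betas would be estimated far more noisily.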

Adrian-Etula-Muir motivate their finding with the argument that broker-dealer leverage proxies for the health of the financial sector as a whole, and that because of limited participation and other factors, the wealth of the financial intermediaries matters more than that of the representative household in forming the aggregate Stochastic Discount Factor (SDF). This appears to me to be a stretch because even if we focus on intermediaries, leverage is not the same thing as wealth.

My initial reaction was that the leverage factor is actually a liquidity factor, but their results show that leverage shocks are largely uncorrelated with the shocks to the Pastor-Stambaugh (2003) liquidity factor.

I wonder whether the leverage factor may be a very elegant way of picking up time varying risk aversion so that the single factor model is close to the CAPM with time varying risk aversion. The empirical results show that the leverage factor mimicking portfolio is very close to being mean variance efficient. If this is so, then we may have a partial return to the cosy world from which Fama and French evicted us a couple of decades ago.

Posted at 11:36 on Mon, 21 Jan 2013     View/Post Comments (0)     permanent link

Sun, 20 Jan 2013

Financial stability, financial resilience and systemic risk

Last week, I found myself involved in a discussion arguing that systemic risk regulation is not the same as the pursuit of financial stability. This discussion helped to clarify my own thoughts on the subject.

There is no doubt that financial stability is currently a highly politically correct term: according to a working paper published by the International Monetary Fund (IMF) a year ago, the number of countries publishing financial stability reports increased from 1 in the mid 1990s to 50 by the mid 2000s and rose further to 80 in 2011. India and the United States have been among those that joined the bandwagon after the global financial crisis. Meanwhile the Financial Stability Board (which was first set up under a slightly different name after the Asian Crisis) has now been transformed into the apex forum for governing global financial regulation.

Yet, there has been a strong view that the pursuit of financial stability is a mistake. The best known proponent of this view was Hyman Minsky who was fond of saying that financial stability is inherently destabilizing. Post crisis, there has also been a great deal of interest in resilience as opposed to stability. The Macroeconomic Resilience blog has become particularly well known for arguing this case eloquently.

Rather than repeat what has been well articulated by these people, I have chosen to put together a totally politically incorrect table highlighting the contrast between financial stability and financial resilience.

Financial Stability                  Financial Resilience
-------------------                  --------------------
Rigidity and resistance to change    Adaptability and survival amidst change
Stasis and Stagnation                Dynamism and progress
Too big to fail                      Too big to exist
Great Moderation                     New normal
Alan Greenspan                       Hyman Minsky

To my mind, systemic risk regulation is the pursuit not of financial stability but of financial resilience.

Posted at 17:37 on Sun, 20 Jan 2013     View/Post Comments (6)     permanent link

Fri, 11 Jan 2013

Why exchanges should be forced to use open source software

For more than a decade now, I have been arguing for using open source software in critical parts of the financial system like stock exchanges (here and here) and depositories (here). At the risk of sounding like a broken record, I want to come back to this in the light of the following cryptic announcement from the BATS exchange in the US two days ago:

Please be advised that BATS has determined that upon an NBBO update on BATS’ BYX Exchange, BZX Exchange and BATS Options, there are certain cases where the Matching Engine will allow for a trade through or an execution of a short sale order at a price that is equal to or less than the NBB when a short sale circuit breaker is in effect under Regulation SHO. These cases result from the sequencing of certain required events in the Matching Engine related to re-pricing and sliding orders in response to the NBBO update.

I found this almost impossible to understand as it is not clear whether the scenario “when a short sale circuit breaker is in effect” applies only to the second type of error (“execution of a short sale order at a price that is equal to or less than the NBB”) or also to the first type of error (“trade through” the NBBO). Focusing on the first type of error, we can make some headway by consulting the BATS exchange User Manual which describes the price sliding process with a numerical example:

Example of BATS Displayed Price Sliding:
1) Buy BATS-Only Order at 10.03
2) Order is re-priced and ranked 10.01 and displayed down to 10.00 (10.01 would lock the NBBO)
3) NBBO goes to 10.00X10.02
4) Order is re-displayed at 10.01 using its existing priority
5) NBBO goes to 10.01X10.03
6) Order remains unchanged (it’s only allowed to unslide once after entry)
Note: Order will always execute at 10.01 regardless of its display price at the time

But even with this explanation, it is hard to understand the precise nature of the software bug. My first thought was that in the above example, if the NBBO moved to 9.99X10.00, the sliding order might execute at 10.01 if it were matched against an incoming order at the BATS exchange. On second thought, I ruled that out because it is too simple not to have been thought about during the software design. Maybe it is a more complex sequence of events, but the terse announcement from the exchange does not really tell us what happened. It is interesting that even when admitting to a serious error, the exchange does not consider it essential to be transparent about the error.
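To make the sliding mechanics concrete, here is a toy state machine that reproduces the manual's numerical example. This is my own simplification for exposition, emphatically not the actual matching-engine logic:

```python
TICK = 0.01

class SlidingOrder:
    """Toy model of displayed-price sliding for a buy order: the order is
    re-priced (ranked) so it cannot cross the NBO, displayed one tick
    lower if displaying at the ranked price would lock the NBBO, and
    allowed to 'unslide' back to its ranked price only once."""

    def __init__(self, limit, nbo):
        self.ranked = min(limit, nbo)
        self.displayed = (round(self.ranked - TICK, 2)
                          if self.ranked >= nbo else self.ranked)
        self.unslid = False

    def on_nbbo_update(self, nbb, nbo):
        # re-display at the ranked price once it no longer locks the market
        if not self.unslid and self.displayed < self.ranked < nbo:
            self.displayed = self.ranked
            self.unslid = True

order = SlidingOrder(10.03, nbo=10.01)  # steps 1-2: ranked 10.01, shown 10.00
order.on_nbbo_update(10.00, 10.02)      # steps 3-4: re-displayed at 10.01
order.on_nbbo_update(10.01, 10.03)      # steps 5-6: unchanged, already unslid
```

Even this stripped-down version has three interacting pieces of state per order; the real engine must sequence such updates against incoming orders across thousands of symbols, which is exactly where subtle ordering bugs hide.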

Over a period of time, exchanges have been designing more and more complex order types. In some ways, these complex order types are actually the limiting case of co-location – instead of executing on the trader’s computer located close to the exchange server, the algorithm is now executing on the exchange server itself, and that too in the core order matching engine itself. The same business logic that favours extensive co-location also favours ever increasing complexity in order types.

In this situation, it makes sense to mandate open source implementations of the core order matching engine. As I wrote six years ago:

It is also evident that in a complex trading system, the number of eventualities to be considered while testing the trading software is quite large. It is very likely that even a reasonable testing effort might not detect all bugs in the system.

Given the large externalities involved in bugs in such core systems, a better approach is needed. The open source model provides such an alternative. By exposing the source code to a large number of people, the chances of discovering any bugs increase significantly. Since there are many software developers building software that interacts with the exchange software, there would be a large developer community with the skill, incentive and knowledge required to analyse the trading software and verify its integrity. In my view, regulators and self regulatory organizations have not yet understood the full power of the open source methodology in furthering the key regulatory goals of market integrity.

But it is not just the exchanges. Regulators also write very complex regulations, and these should ideally be written in the form of open source software. Instead, regulators all over the world write long winded regulations and circulars that are open to many different implementations and that do not function as expected when they are most needed.

Posted at 12:23 on Fri, 11 Jan 2013     View/Post Comments (0)     permanent link

Sun, 06 Jan 2013

Liquidation efficiency of CCPs (clearing corporations)

Earlier this week, I wrote a blog post applying the Gorton-Metrick idea of contractual liquidation efficiency to CCPs or clearing corporations. After that, I came across an interesting paper by Richard Squire (December 2012) arguing that the only real benefit of a clearing house is speed and certainty of liquidation and that this benefit obtains even if the clearing house itself is insolvent.

Squire accepts the arguments of Pirrong and others that the risk reduction benefits of central clearing are dubious (risk reduction in one part of the system comes at the cost of greater risk elsewhere in the system). Yet CCPs are valuable because they speed up the bankruptcy process and give greater certainty to all creditors (even those who are outside the clearing house).

It is clear that Squire has a point. The worst part of the Lehman bankruptcy was that counter parties had their money trapped in the bankruptcy court for years without either liquidity or certainty.

Four years after Lehman filed for protection under Chapter 11, the Lehman estate still held $14.3 billion in restricted cash, which included $10.9 billion in a reserve fund for paying out unsecured claims. (Page 37)

Squire points out how the normal bankruptcy process is designed to be extremely slow:

To distribute assets among creditors, a bankruptcy trustee must do two things. First, she must determine what the assets are worth, which she can do through financial valuation methods or with an auction that converts the assets to cash. Second, she must determine the amount of the debtor’s liabilities, which requires her to collect all creditor proofs of claim and resolve challenges to their enforceability and amounts. Given these requirements, it is difficult to think of a slower rule for distributing debtor assets than the pro rata rule. Under that rule, each creditor is paid according to the ratio between the amount of his claim and the debtor’s total liabilities. It follows that all liabilities must be confirmed and valuated before any creditor can be paid. (Page 36)
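A toy computation (my illustration, not Squire's) makes the point: under the pro rata rule every payout ratio depends on the total of all claims, so a single disputed claim holds up every creditor:

```python
def pro_rata_payouts(assets, claims):
    """Pro rata distribution: each creditor receives assets scaled by the
    ratio of his claim to TOTAL liabilities, so nothing can be paid out
    until every claim has been confirmed."""
    total = sum(claims.values())
    return {name: round(assets * amt / total, 2)
            for name, amt in claims.items()}

before = pro_rata_payouts(60, {"A": 100, "B": 50, "C": 50})
# a disputed claim of 100 is later allowed, and every payout changes
after = pro_rata_payouts(60, {"A": 100, "B": 50, "C": 50, "D": 100})
```

Allowing the disputed claim cuts every other creditor's payout by a third, which is why nobody can safely be paid early.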

The clearing house speeds up this process enormously and provides greater liquidity and certainty. More importantly, this is not at the cost of other creditors of the bankrupt entity:

Unlike netting’s purely redistributive consequences, its payout-acceleration benefit is not zero-sum. Thus, the faster payouts for the clearinghouse members are not the result of slower payouts for the outside creditors. To the contrary, netting simplifies the work of the failed member’s bankruptcy trustee, which might permit the outside creditors also to be paid more quickly than they would otherwise. ... And while the arithmetical amounts of their payouts will be reduced by netting’s redistributive effect, the loss may partly be neutralized by the fact that the smaller scope of the bankruptcy estate may save on administrative costs and hence leave more value left over for creditors. Netting therefore is clearly a source of value creation. (Page 38)

The most important part of the paper is the argument that the benefits of netting would remain even if the clearing house itself is bankrupt.

Whereas creditors typically insist on being paid in cash, they are generally willing to accept cancellation of their own debts as payment for their own claims. And netting within the clearinghouse increases the opportunities for this to occur. ... Because of netting, Firm A is, in effect, able to take [an IOU from Firm C] and force Firm B to accept it in satisfaction of Firm A’s debt to Firm B. And Firm B, in turn, can take the same IOU and use it to repay its $100 debt to Firm C. Since the IOU is now back in the hands of its issuer, it is cancelled. No cash has changed hands, and therefore none been paid into a bankruptcy estate. And because each transfer of the IOU occurs through setoff rights, the transfers can occur even if the clearinghouse is bankrupt. This capacity for a clearinghouse to transform a debt obligation into a medium of exchange as good as cash is of obvious social value during a liquidity shortage. (Page 42)
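The IOU cycle in this passage is easy to sketch as a multilateral netting computation; this is a toy illustration of the arithmetic only, not of the legal mechanics of setoff:

```python
from collections import defaultdict

def net_positions(obligations):
    """Each firm's net position: amounts owed to it minus amounts it owes.
    Keys are (debtor, creditor) pairs."""
    net = defaultdict(int)
    for (debtor, creditor), amount in obligations.items():
        net[debtor] -= amount
        net[creditor] += amount
    return dict(net)

# Squire's cycle: A owes B, B owes C, C owes A, $100 each
gross = {("A", "B"): 100, ("B", "C"): 100, ("C", "A"): 100}
net = net_positions(gross)
```

Although $300 of gross obligations exist, every net position is zero, so no cash needs to change hands and nothing gets trapped in a bankruptcy estate.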

I am now even more convinced that CCPs (clearing houses) must be designed to fail gracefully. Many of them have moved in this direction through loss allocation rules for each segment that effectively cap the liability of the CCP and make it less likely to go bust. We must extend the scope of these mechanisms to make it almost impossible for a CCP to become bankrupt just as securitization waterfalls make it almost impossible for an SPV to become bankrupt. Such rules are the only way to prevent the need for bailing out the CCP and engendering moral hazard in the process.

If we see CCPs not as a magic bullet to eliminate risk, but as a legal mechanism to achieve fast bankruptcy with high legal certainty for payouts, then the CCP begins to look more like a CDO than like an over regulated financial infrastructure. This would be a great achievement because it solves the dilemma that forces regulators to either regulate CCPs as utilities and forgo the benefits of competition or allow free competition and see a race to the bottom in risk management. By pushing the risks of CCP failure back to the users of the CCP, a mandatory loss allocation mechanism (like a CDO waterfall clause) allows competition to work its usual magic without creating systemic risk or moral hazard. The world should then be able to withstand a credit event at even the largest CCPs like LCH.Clearnet, CME Clearing or Eurex Clearing. Similarly, India should then be able to withstand a credit event at its largest CCPs like CCIL or NSCCL.

Post crisis, regulators have expended much energy on resolution mechanisms to eliminate the “too big to fail” problem. I think resolution mechanisms need to draw upon lessons learnt from securitization and CDOs about how to make this work. I often say that the key purpose of resolution is not to ensure that firms do not die, but to ensure that when they do die, there are no stinking corpses. CDOs and securitization SPVs have shown how this can be done effectively – these methods have proven themselves on the ground and have stood the test of time. Instead of designing resolution mechanisms on a clean slate, regulators should take these proven methods and extend their scope and application to cover large swathes of the financial sector.

Posted at 19:05 on Sun, 06 Jan 2013     View/Post Comments (1)     permanent link

Tue, 01 Jan 2013

Contractual living wills and liquidation efficiency

Gary Gorton and Andrew Metrick published a fantastic paper last month on “Securitization” (NBER Working Paper 18611). This paper contains a wealth of information, a detailed survey of the literature and a number of very interesting theoretical ideas. What I found most interesting is the idea that the most important benefit of securitization could be a reduction in bankruptcy costs. In passing, Gorton and Metrick talk about “contractual living wills”, a set of contractual arrangements in securitization that have some similarities to the living wills being proposed as mechanisms to enable easy resolution of banks in the post crisis regulatory reforms. I think this analogy is worth pursuing even further.

In a securitization, all the assets and liabilities are housed in a Special Purpose Vehicle (SPV) which is structured in such a way as to make bankruptcy all but impossible. Gorton and Metrick see this as a big part of the economic function of securitization:

... the SPV cannot become bankrupt. This was an innovation. That is, the design of SPVs to have this feature is an important part of the value of securitization. Moreover, it has economic substance. Since the cash flows are passive, there are no valuable control rights over corporate assets to be contested in a bankruptcy process. Thus, it is in all claimants’ interest to avoid a costly bankruptcy process. (Page 19)

If the assets perform badly and the cash flows from the assets are not sufficient to pay all the coupons, the SPV does not enter bankruptcy – instead the available funds are used to pay the senior claimants early while writing down the liabilities to the junior claimants. Gorton and Metrick call this a contractual living will (Page 8). But I think it is much more than the living wills that post crisis banks are being required to prepare for themselves. It is not just that the SPV waterfall rules are contractual and therefore self implementing unlike the wishful thinking that goes into the living wills of the banks. What is more important is that the SPV waterfall rules constitute a contractual bail-in arrangement whereby the junior claimants’ principal gets written down to restore the solvency of the SPV. Similarly liquidity problems are automatically addressed by extending maturities contractually. (It is not uncommon to see securitization structures in which the expected weighted average life of a securitization tranche is only 5 years, but its rated and legal final maturity is 30 years.)
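The waterfall logic can be sketched in a few lines. This is a stylized single-period toy, not the terms of any real securitization:

```python
def waterfall(cash, tranches):
    """Distribute available cash in strict seniority order. Whatever
    cannot be paid is simply a write-down of the junior claims --
    no bankruptcy filing, no court, no trustee."""
    payouts = {}
    for name, due in tranches:        # ordered senior -> junior
        paid = min(cash, due)
        payouts[name] = paid
        cash -= paid
    return payouts

# the assets generate only 70 against 100 of promised payments
payouts = waterfall(70, [("senior", 50), ("mezzanine", 30), ("junior", 20)])
```

With only 70 of cash against 100 of promised payments, the senior tranche is paid in full, the mezzanine is written down by 10, and the junior by 20 – all by contract, which is exactly the contractual bail-in described above.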

Gorton and Metrick are right to point out that some of these things are easy to do because the cash flows of an SPV are passive and therefore there is no judgement required to manage them. The SPV is “brain dead” and is completely governed by contract. But I think that resolution of banks and other financial institutions can learn a lot from the SPV liquidation arrangements. Failed institutions can often be put in run-off mode where most of the management can be passive. Private ordering usually fares better than complex regulatory mechanisms.

It is also possible for a business segment to be put into SPV style liquidation arrangements (with near zero bankruptcy costs) while the rest of the institution runs normally. Many central counterparties (CCPs or clearing corporations) have framed rules under which, if the losses in a particular segment exceed a certain threshold, loss allocation mechanisms kick in that would effectively shut down that segment – contractual bail-in eliminates bankruptcy. I think regulators should consider mandating such contractual provisions that make it impossible for a CCP to go bankrupt. CCPs should be allowed to fail, but the failure should not involve bankruptcy. Post crisis, many CCPs are beginning to clear very risky products that make it extremely likely that a large CCP in a G-7 country would fail in the next decade or so. Contractual living wills and contractual bail-ins would prevent such a failure from being a catastrophic event.

I think it is also possible to convert a failed bank into a CDO that is put into run-off mode with contractual provisions governing the loss allocations without any need for formal bankruptcy at all. Nearly seven years ago (well before the global crisis), I wrote in a blog post that “Having invented banks first, humanity found it necessary to invent CDOs because they are far more efficient and transparent ways of bundling and trading credit risk. Had we invented CDOs first, would we have ever found it necessary to invent banks?” Even if we do not want to replace all banks by CDOs, we can at least replace failed banks by CDOs that are “liquidation efficient” in Gorton and Metrick’s elegant phrase.

Posted at 20:51 on Tue, 01 Jan 2013     View/Post Comments (0)     permanent link