Prof. Jayanth R. Varma's Financial Markets Blog

A Blog on Financial Markets and Their Regulation

© Prof. Jayanth R. Varma
jrvarma@iima.ac.in


Fri, 25 Jan 2013

Pamper the consumers or the computer programmers?

In case you thought that the answer to this question is obvious, you should read the report of the Reserve Bank of India’s Technical Committee to Examine Uniform Routing Code and A/c Number Structure. While recommending 26-digit bank account numbers (IBAN) in India, the Committee has this to say:

6.5.3 The main disadvantage (if we really have to pamper to customers as the information can be easily displayed/stored on debit cards and cell phones, besides the traditional paper diary/chit of paper) of this IBAN option is that though it entails least effort from banks and facilitates faster IBAN implementation, it provides a more complex payment system interface to customers due to long IBAN string. In other words, while efforts at banks’ end will be minimized, the customers will still have to remember and provide the long IBAN, including check digits, for their payment system activities. (emphasis added)

In other words, the convenience of the banks’ computers and their programmers trumps the convenience of hundreds of millions of consumers.

Another troubling passage in the report is the following discussion about why the branch code cannot be omitted in the bank code (IFSC) that is used for electronic fund transfers:

Upon enquiring with banks, it is learnt that many banks have not built any check-digit in their account numbers. Thus, any inward remittance which comes to a bank will be processed even if there is any mistake in account number, as long as that account number exists in the beneficiary bank. In the absence of check digit in account numbers, many banks depend on the branch identifier to avoid credit being afforded to wrong accounts. This is a significant irreversible risk where wrong beneficiary would get the credit and customer would have no recourse – legal or moral

The idea that a branch identifier is a substitute for a check digit is a serious mistake. Any reasonable check digit should catch all single-digit errors and most (if not all) transposition errors (where two neighbouring digits are interchanged). These are the most common errors in writing or typing a long number (the other common error of omitting a digit is easily caught even without a check digit because the number of digits in an account number is fixed for each bank). The use of the branch identifier, on the other hand, is not guaranteed to catch the most commonly occurring errors – many single-digit errors would lead to a valid account number at the same branch. With the increasing use of electronic fund transfers (which ignore the name of the account holder and rely only on the account number), I would have thought that it would make sense to insist that all account numbers should have a check digit instead of insisting that the IFSC code should include a branch code. But that would place a greater burden on some overworked computer programmers in some banks – and regulators apparently think that systems people (unlike consumers) must be pampered at all costs.
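The report does not specify any particular scheme, but a minimal numeric-only sketch (in Python, with a made-up account number) of the ISO 7064 MOD 97-10 scheme that IBAN itself uses shows how little machinery is involved:

def append_check_digits(acct: str) -> str:
    # ISO 7064 MOD 97-10: append two check digits so that the full
    # number leaves a remainder of 1 when divided by 97.
    check = 98 - (int(acct) * 100) % 97
    return acct + f"{check:02d}"

def is_valid(acct_with_check: str) -> bool:
    return int(acct_with_check) % 97 == 1

acct = append_check_digits("123456789012")         # hypothetical account number
assert is_valid(acct)
assert not is_valid("9" + acct[1:])                # single-digit error caught
assert not is_valid(acct[1] + acct[0] + acct[2:])  # adjacent transposition caught

Because 97 is a prime larger than 10, no single-digit substitution and no transposition of two unequal adjacent digits can leave the remainder modulo 97 unchanged, which is precisely the guarantee that a branch identifier cannot provide.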

The problem is not confined to banking. In the financial markets also, the convenience of the programmers often dictates the nature of market regulation, and the systems people are able to hold the regulator to ransom by simply asserting that software changes are too difficult. On the other hand, whenever I go to websites like stackoverflow in search of answers to some computing problem, I am constantly amazed that there are so many people able and willing to find solutions to the most difficult problems. In an ideal world, I think regulators would require every systemically important financial organization to have senior systems people with a reputation of, say, 10,000 at stackoverflow or some such metric of competence and a “can do” attitude.

While we have “fit and proper” requirements for the top management of banks and financial organizations, Basel and IOSCO do not impose any “fit and proper” requirement on the systems people. I think this needs to change because so much of the risk comes from poorly designed and poorly maintained software.

Posted at 16:33 on Fri, 25 Jan 2013     View/Post Comments (6)     permanent link


Mon, 21 Jan 2013

Single factor asset pricing model with leverage shocks

I have been reading an interesting paper by Tobias Adrian, Erkko Etula and Tyler Muir proposing a single factor asset pricing model that is based on shocks to securities broker-dealer leverage. The performance of this single factor model in pricing the Fama-French and momentum portfolios seems to be as good as that of the four factor model that includes the three Fama-French factors (market, size and value) and the momentum factor. In addition, the leverage factor model prices risk free bond portfolios as well as the four factor model augmented with a factor for interest rate level.
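In standard stochastic discount factor (SDF) terms, and this is the generic form such a claim takes rather than necessarily the paper's exact specification, the model says that an SDF linear in leverage shocks,

$$M_{t+1} = a - b\,\Delta\mathrm{Lev}_{t+1},$$

satisfies the pricing condition $E[M_{t+1} R^e_{i,t+1}] = 0$ for every excess return $R^e_i$, which is equivalent to the one-beta representation

$$E[R^e_i] = \beta_{i,\mathrm{Lev}}\,\lambda_{\mathrm{Lev}},$$

with a single price of risk $\lambda_{\mathrm{Lev}}$ shared by all the equity and bond test portfolios at once.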

The results seem too good to be true, and Bayesian theory teaches us that surprising results are likely to be false even if they are published in a top-notch peer-reviewed journal (see for example here or here). (I do recall the incident a couple of years ago when the Chen-Zhang q-factor papers became “defunct” after a timing error was identified in the initial work.) Having said that, the Adrian-Etula-Muir paper has been around since 2008 and was last revised in March 2012. Maybe it has survived long enough to be taken seriously.

Another possible criticism is that the Adrian-Etula-Muir paper does all the empirical analysis using the Fama-French style size-value-momentum portfolios and not on the individual stocks themselves. Falkenblog goes so far as to say “What I suspect, though I haven’t done the experiment, is that if you regress individual stocks against this factor there will be a zero correlation with returns.” My own intuition is that the effect would not weaken so dramatically in going from portfolios to individual stocks. In any case, asset pricing tests have to be based on portfolios to obtain statistical power – the correct question to ask is whether the correlation with a random well diversified portfolio is likely to be high.
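For readers who want to see the mechanics of such a test, here is a toy two-pass (Fama-MacBeth style) regression on synthetic data; the factor series, portfolio count and parameter values are all invented stand-ins, not the paper's data:

import numpy as np

rng = np.random.default_rng(0)
T, N = 480, 25                        # months and test portfolios (made up)
factor = rng.normal(size=T)           # stand-in for leverage shocks
beta_true = rng.uniform(0.5, 1.5, N)
lam_true = 0.5                        # assumed price of leverage risk, % per month
excess = beta_true * (lam_true + factor[:, None]) + rng.normal(scale=2.0, size=(T, N))

# Pass 1: time-series regression of each portfolio on the factor gives betas.
X = np.column_stack([np.ones(T), factor])
beta_hat = np.linalg.lstsq(X, excess, rcond=None)[0][1]

# Pass 2: cross-sectional regression of average returns on the betas
# gives the estimated price of risk (and a pricing-error intercept).
Z = np.column_stack([np.ones(N), beta_hat])
alpha_hat, lam_hat = np.linalg.lstsq(Z, excess.mean(axis=0), rcond=None)[0]
print(f"estimated price of risk {lam_hat:.2f} vs assumed {lam_true}")

The first pass recovers each portfolio's leverage beta from the time series; the second asks whether average returns line up with those betas. Running the test on well diversified portfolios rather than individual stocks shrinks the idiosyncratic noise in both passes, which is the statistical power argument above.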

Adrian-Etula-Muir motivate their finding with the argument that broker-dealer leverage proxies for the health of the financial sector as a whole, and that because of limited participation and other factors, the wealth of the financial intermediaries matters more than that of the representative household in forming the aggregate Stochastic Discount Factor (SDF). This appears to me to be a stretch because even if we focus on intermediaries, leverage is not the same thing as wealth.

My initial reaction was that the leverage factor is actually a liquidity factor, but their results show that leverage shocks are largely uncorrelated with the shocks to the Pastor-Stambaugh (2003) liquidity factor.

I wonder whether the leverage factor may be a very elegant way of picking up time varying risk aversion so that the single factor model is close to the CAPM with time varying risk aversion. The empirical results show that the leverage factor mimicking portfolio is very close to being mean variance efficient. If this is so, then we may have a partial return to the cosy world from which Fama and French evicted us a couple of decades ago.

Posted at 06:06 on Mon, 21 Jan 2013     View/Post Comments (0)     permanent link


Sun, 20 Jan 2013

Financial stability, financial resilience and systemic risk

Last week, I found myself involved in a discussion arguing that systemic risk regulation is not the same as the pursuit of financial stability. This discussion helped to clarify my own thoughts on the subject.

There is no doubt that financial stability is currently a highly politically correct term: according to a working paper published by the International Monetary Fund (IMF) a year ago, the number of countries publishing financial stability reports increased from 1 in the mid-1990s to 50 by the mid-2000s and rose further to 80 in 2011. India and the United States have been among those that joined the bandwagon after the global financial crisis. Meanwhile, the Financial Stability Board (which was first set up under a slightly different name after the Asian Crisis) has now been transformed into the apex forum for governing global financial regulation.

Yet, there has been a strong view that the pursuit of financial stability is a mistake. The best known proponent of this view was Hyman Minsky who was fond of saying that financial stability is inherently destabilizing. Post crisis, there has also been a great deal of interest in resilience as opposed to stability. The Macroeconomic Resilience blog has become particularly well known for arguing this case eloquently.

Rather than repeat what has been well articulated by these people, I have chosen to put together a totally politically incorrect table highlighting the contrast between financial stability and financial resilience.

Financial Stability                 | Financial Resilience
------------------------------------|-----------------------------------------
Rigidity and resistance to change   | Adaptability and survival amidst change
Stasis and stagnation               | Dynamism and progress
Pro-incumbent                       | Pro-competition
Too big to fail                     | Too big to exist
Great Moderation                    | New normal
Alan Greenspan                      | Hyman Minsky

To my mind, systemic risk regulation is the pursuit not of financial stability but of financial resilience.

Posted at 17:37 on Sun, 20 Jan 2013     View/Post Comments (6)     permanent link


Fri, 11 Jan 2013

Why exchanges should be forced to use open source software

For more than a decade now, I have been arguing for using open source software in critical parts of the financial system like stock exchanges (here and here) and depositories (here). At the risk of sounding like a broken record, I want to come back to this in the light of the following cryptic announcement from the BATS exchange in the US two days ago:

Please be advised that BATS has determined that upon an NBBO update on BATS’ BYX Exchange, BZX Exchange and BATS Options, there are certain cases where the Matching Engine will allow for a trade through or an execution of a short sale order at a price that is equal to or less than the NBB when a short sale circuit breaker is in effect under Regulation SHO. These cases result from the sequencing of certain required events in the Matching Engine related to re-pricing and sliding orders in response to the NBBO update.

I found this almost impossible to understand as it is not clear whether the scenario “when a short sale circuit breaker is in effect” applies only to the second type of error (“execution of a short sale order at a price that is equal to or less than the NBB”) or also to the first type of error (“trade through” the NBBO). Focusing on the first type of error, we can make some headway by consulting the BATS exchange User Manual which describes the price sliding process with a numerical example:

Example of BATS Displayed Price Sliding:
NBBO:
10.00X10.01
BATS:
10.00X10.02
1) Buy BATS-Only Order at 10.03
2) Order is re-priced and ranked 10.01 and displayed down to 10.00 (10.01 would lock the NBBO)
3) NBBO goes to 10.00X10.02
4) Order is re-displayed at 10.01 using its existing priority
5) NBBO goes to 10.01X10.03
6) Order remains unchanged (it’s only allowed to unslide once after entry)
Note: Order will always execute at 10.01 regardless of its display price at the time

But even with this explanation, it is hard to understand the precise nature of the software bug. My first thought was that in the above example, if the NBBO moved to 9.99X10.00, the sliding order might execute at 10.01 if it were matched against an incoming order at the BATS exchange. On second thought, I ruled that out because it is too simple not to have been thought about during the software design. Maybe it is a more complex sequence of events, but the terse announcement from the exchange does not really tell us what happened. It is interesting that even when admitting to a serious error, the exchange does not consider it essential to be transparent about the error.
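For concreteness, here is a toy rendering of the re-pricing rule in the manual's example (prices in integer cents; this is my simplification, not BATS code):

def slide_buy(limit, away_ask, tick=1):
    # A BATS-only buy that would lock or cross the away market is
    # ranked at the away ask (its true executable price) but displayed
    # one tick lower, since displaying at the ask would lock the NBBO.
    if limit >= away_ask:
        return away_ask, away_ask - tick   # (ranked, displayed)
    return limit, limit                    # no sliding needed

ranked, displayed = slide_buy(limit=1003, away_ask=1001)
print(ranked, displayed)   # 1001 1000, matching steps 1-2 of the example

If this rule is correct in isolation, the bug presumably lies in the sequencing of such re-pricing events relative to executions when the NBBO updates, which is exactly the part that the announcement leaves opaque.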

Over a period of time, exchanges have been designing more and more complex order types. In some ways, these complex order types are actually the limiting case of co-location – instead of executing on the trader’s computer located close to the exchange server, the algorithm now executes on the exchange server, indeed inside the core order matching engine itself. The same business logic that favours extensive co-location also favours ever increasing complexity in order types.

In this situation, it makes sense to mandate open source implementations of the core order matching engine. As I wrote six years ago:

It is also evident that in a complex trading system, the number of eventualities to be considered while testing the trading software is quite large. It is very likely that even a reasonable testing effort might not detect all bugs in the system.

Given the large externalities involved in bugs in such core systems, a better approach is needed. The open source model provides such an alternative. By exposing the source code to a large number of people, the chances of discovering any bugs increase significantly. Since there are many software developers building software that interacts with the exchange software, there would be a large developer community with the skill, incentive and knowledge required to analyse the trading software and verify its integrity. In my view, regulators and self regulatory organizations have not yet understood the full power of the open source methodology in furthering the key regulatory goals of market integrity.

But it is not just the exchanges. Regulators too write very complex regulations, and these too should ideally be written in the form of open source software. Instead, regulators all over the world write long-winded regulations and circulars which are open to many different implementations and which do not function as expected when they are most needed.

Posted at 12:23 on Fri, 11 Jan 2013     View/Post Comments (0)     permanent link


Sun, 06 Jan 2013

Liquidation efficiency of CCPs (clearing corporations)

Earlier this week, I wrote a blog post applying the Gorton-Metrick idea of contractual liquidation efficiency to CCPs or clearing corporations. After that, I came across an interesting paper by Richard Squire (December 2012) arguing that the only real benefit of a clearing house is speed and certainty of liquidation and that this benefit obtains even if the clearing house itself is insolvent.

Squire accepts the arguments of Pirrong and others that the risk reduction benefits of central clearing are dubious (risk reduction in one part of the system comes at the cost of greater risk elsewhere in the system). Yet CCPs are valuable because they speed up the bankruptcy process and give greater certainty to all creditors (even those who are outside the clearing house).

It is clear that Squire has a point. The worst part of the Lehman bankruptcy was that counterparties had their money trapped in the bankruptcy court for years without either liquidity or certainty.

Four years after Lehman filed for protection under Chapter 11, the Lehman estate still held $14.3 billion in restricted cash, which included $10.9 billion in a reserve fund for paying out unsecured claims. (Page 37)

Squire points out how the normal bankruptcy process is designed to be extremely slow:

To distribute assets among creditors, a bankruptcy trustee must do two things. First, she must determine what the assets are worth, which she can do through financial valuation methods or with an auction that converts the assets to cash. Second, she must determine the amount of the debtor’s liabilities, which requires her to collect all creditor proofs of claim and resolve challenges to their enforceability and amounts. Given these requirements, it is difficult to think of a slower rule for distributing debtor assets than the pro rata rule. Under that rule, each creditor is paid according to the ratio between the amount of his claim and the debtor’s total liabilities. It follows that all liabilities must be confirmed and valuated before any creditor can be paid. (Page 36)
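In symbols, the pro rata rule pays creditor $i$

$$p_i = \frac{c_i}{\sum_j c_j}\,A,$$

where $c_i$ is creditor $i$'s allowed claim and $A$ is the value of the estate; since the denominator is the total of all liabilities, no individual payout can be computed until the last claim has been confirmed and valued.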

The clearing house speeds up this process enormously and provides greater liquidity and certainty. More importantly, this is not at the cost of other creditors of the bankrupt entity:

Unlike netting’s purely redistributive consequences, its payout-acceleration benefit is not zero-sum. Thus, the faster payouts for the clearinghouse members are not the result of slower payouts for the outside creditors. To the contrary, netting simplifies the work of the failed member’s bankruptcy trustee, which might permit the outside creditors also to be paid more quickly than they would otherwise. ... And while the arithmetical amounts of their payouts will be reduced by netting’s redistributive effect, the loss may partly be neutralized by the fact that the smaller scope of the bankruptcy estate may save on administrative costs and hence leave more value left over for creditors. Netting therefore is clearly a source of value creation. (Page 38)

The most important part of the paper is the argument that the benefits of netting would remain even if the clearing house itself is bankrupt.

Whereas creditors typically insist on being paid in cash, they are generally willing to accept cancellation of their own debts as payment for their own claims. And netting within the clearinghouse increases the opportunities for this to occur. ... Because of netting, Firm A is, in effect, able to take [an IOU from Firm C] and force Firm B to accept it in satisfaction of Firm A’s debt to Firm B. And Firm B, in turn, can take the same IOU and use it to repay its $100 debt to Firm C. Since the IOU is now back in the hands of its issuer, it is cancelled. No cash has changed hands, and therefore none been paid into a bankruptcy estate. And because each transfer of the IOU occurs through setoff rights, the transfers can occur even if the clearinghouse is bankrupt. This capacity for a clearinghouse to transform a debt obligation into a medium of exchange as good as cash is of obvious social value during a liquidity shortage. (Page 42)
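Squire's three-firm cycle is easy to make concrete; the firm names and $100 amounts below are just his illustration restated in code:

from collections import defaultdict

# Firm A owes B $100, B owes C $100, and C owes A $100 (via the IOU).
obligations = [("A", "B", 100), ("B", "C", 100), ("C", "A", 100)]

net = defaultdict(int)  # net position of each firm after multilateral netting
for payer, payee, amount in obligations:
    net[payer] -= amount
    net[payee] += amount

print(dict(net))  # {'A': 0, 'B': 0, 'C': 0}

The cycle nets to zero: all three debts are extinguished without a single cash payment, so nothing gets trapped in any bankruptcy estate.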

I am now even more convinced that CCPs (clearing houses) must be designed to fail gracefully. Many of them have moved in this direction through loss allocation rules for each segment that effectively cap the liability of the CCP and make it less likely to go bust. We must extend the scope of these mechanisms to make it almost impossible for a CCP to become bankrupt, just as securitization waterfalls make it almost impossible for an SPV to become bankrupt. Such rules are the only way to prevent the need for bailing out the CCP and thereby engendering moral hazard.

If we see CCPs not as a magic bullet to eliminate risk, but as a legal mechanism to achieve fast bankruptcy with high legal certainty for payouts, then the CCP looks more like a CDO than like an over-regulated piece of financial infrastructure. This would be a great achievement because it solves the dilemma that forces regulators either to regulate CCPs as utilities and forgo the benefits of competition, or to allow free competition and see a race to the bottom in risk management. By pushing the risks of CCP failure back to the users of the CCP, a mandatory loss allocation mechanism (like a CDO waterfall clause) allows competition to work its usual magic without creating systemic risk or moral hazard. The world should then be able to withstand a credit event at even the largest CCPs like LCH.Clearnet, CME Clearing or Eurex Clearing. Similarly, India should then be able to withstand a credit event at its largest CCPs like CCIL or NSCCL.

Post crisis, regulators have expended much energy on resolution mechanisms to eliminate the “too big to fail” problem. I think resolution mechanisms need to draw upon lessons learnt from securitization and CDOs about how to make this work. I often say that the key purpose of resolution is not to ensure that firms do not die, but to ensure that when they do die, there are no stinking corpses. CDOs and securitization SPVs have shown how this can be done effectively – these methods have proven themselves on the ground and have stood the test of time. Instead of designing resolution mechanisms on a clean slate, regulators should take these proven methods and extend their scope and application to cover large swathes of the financial sector.

Posted at 19:05 on Sun, 06 Jan 2013     View/Post Comments (1)     permanent link


Tue, 01 Jan 2013

Contractual living wills and liquidation efficiency

Gary Gorton and Andrew Metrick published a fantastic paper last month on “Securitization” (NBER Working Paper 18611). This paper contains a wealth of information, a detailed survey of the literature and a number of very interesting theoretical ideas. What I found most interesting is the idea that the most important benefit of securitization could be a reduction in bankruptcy costs. In passing, Gorton and Metrick talk about “contractual living wills”, a set of contractual arrangements in securitization that have some similarities to the living wills that are being proposed as mechanisms to enable easy resolution of banks in the post crisis regulatory reforms. I think this analogy is worth pursuing even further.

In a securitization, all the assets and liabilities are housed in a Special Purpose Vehicle (SPV), which is structured in such a way as to make bankruptcy all but impossible. Gorton and Metrick see this as a big part of the economic function of securitization:

... the SPV cannot become bankrupt. This was an innovation. That is, the design of SPVs to have this feature is an important part of the value of securitization. Moreover, it has economic substance. Since the cash flows are passive, there are no valuable control rights over corporate assets to be contested in a bankruptcy process. Thus, it is in all claimants’ interest to avoid a costly bankruptcy process. (Page 19)

If the assets perform badly and the cash flows from the assets are not sufficient to pay all the coupons, the SPV does not enter bankruptcy – instead the available funds are used to pay the senior claimants early while writing down the liabilities to the junior claimants. Gorton and Metrick call this a contractual living will (Page 8). But I think it is much more than the living wills that banks are being required to prepare for themselves post crisis. It is not just that the SPV waterfall rules are contractual and therefore self-implementing, unlike the wishful thinking that goes into the living wills of the banks. What is more important is that the SPV waterfall rules constitute a contractual bail-in arrangement whereby the junior claimants’ principal gets written down to restore the solvency of the SPV. Similarly, liquidity problems are automatically addressed by extending maturities contractually. (It is not uncommon to see securitization structures in which the expected weighted average life of a securitization tranche is only 5 years, but its rated and legal final maturity is 30 years.)
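A contractual waterfall of this kind is simple enough to state as code; the tranche sizes below are invented for illustration:

def run_waterfall(available_cash, tranches):
    # Pay tranches in order of seniority; any shortfall is written
    # down against the junior claimants instead of triggering a
    # bankruptcy process.
    result = {}
    for name, due in tranches:
        paid = min(available_cash, due)
        result[name] = {"paid": paid, "written_down": due - paid}
        available_cash -= paid
    return result

print(run_waterfall(120, [("senior", 100), ("mezzanine", 30), ("equity", 20)]))
# senior is paid in full, mezzanine is paid 20 and written down 10,
# and equity is written down entirely; the SPV remains solvent throughout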

Gorton and Metrick are right to point out that some of these things are easy to do because the cash flows of an SPV are passive and therefore there is no judgement required to manage them. The SPV is “brain dead” and is completely governed by contract. But I think that resolution of banks and other financial institutions can learn a lot from the SPV liquidation arrangements. Failed institutions can often be put in run-off mode where most of the management can be passive. Private ordering usually fares better than complex regulatory mechanisms.

It is also possible for a business segment to be put into SPV style liquidation arrangements (with near zero bankruptcy costs) while the rest of the institution runs normally. Many central counterparties (CCPs or clearing corporations) have framed rules under which, if the losses in a particular segment exceed a certain threshold, loss allocation mechanisms kick in that would effectively shut down that segment – contractual bail-in eliminates bankruptcy. I think regulators should consider mandating such contractual provisions that make it impossible for a CCP to go bankrupt. CCPs should be allowed to fail, but the failure should not involve bankruptcy. Post crisis, many CCPs are beginning to clear very risky products that make it extremely likely that a large CCP in a G-7 country would fail in the next decade or so. Contractual living wills and contractual bail-ins would prevent such a failure from being a catastrophic event.

I think it is also possible to convert a failed bank into a CDO that is put into run-off mode with contractual provisions governing the loss allocations without any need for formal bankruptcy at all. Nearly seven years ago (well before the global crisis), I wrote in a blog post that “Having invented banks first, humanity found it necessary to invent CDOs because they are far more efficient and transparent ways of bundling and trading credit risk. Had we invented CDOs first, would we have ever found it necessary to invent banks?” Even if we do not want to replace all banks by CDOs, we can at least replace failed banks by CDOs that are “liquidation efficient” in Gorton and Metrick’s elegant phrase.

Posted at 20:51 on Tue, 01 Jan 2013     View/Post Comments (0)     permanent link