Prof. Jayanth R. Varma's Financial Markets Blog

A Blog on Financial Markets and Their Regulation
© Prof. Jayanth R. Varma (jrvarma@iima.ac.in)

Fri, 19 Mar 2010

Indian Financial Stability and Development Council

I wrote a column in the Financial Express today about the proposal to create a Financial Stability and Development Council in India as a potential precursor to an apex regulatory body.

The announcement in the Budget speech this year about the setting up of a Financial Stability and Development Council (FSDC) has revived the long-standing debate about an apex regulatory body. Much of the debate on FSDC has focused on the politically important but economically trivial question of the chairmanship of the council. I care little about who heads FSDC—I care more about whether it has a permanent and independent secretariat. And I care far more about what the FSDC does.

The global financial crisis has highlighted weaknesses in the regulatory architecture around the world. Neither the unified regulator of the UK nor the highly fragmented regulators of the US came out with flying colours in dealing with the crisis. Everywhere, the crisis has brought to the fore the problems of regulatory overlap and underlap. In every country, there are areas where multiple regulators are fighting turf wars over one set of issues, while more pressing regulatory issues fall outside the mandate of any regulator. Regulation and supervision of systemically important financial conglomerates is an area seen as critical in the aftermath of the crisis. It is an area that has been highly problematic in India.

The most important failure (and bail-out) of a systemically important financial institution in India in recent times was the rescue of UTI, which did not completely fall under any regulator’s jurisdiction. The most systemically important financial institution in India today is probably the LIC, whose primary regulator has struggled to assert full regulatory jurisdiction over it. Even the remaining three or four systemically critical financial conglomerates in India are not subject to adequate consolidated financial supervision. The global crisis has shown that the concept of a lead regulator as a substitute for effective consolidated supervision is a cruel joke. The court examiner’s report in the Lehman bankruptcy released this month describes in detail how the ‘consolidated supervision’ by the US SEC of the non-broker-dealer activities of Lehman descended into a farce. Even before that we knew what happened when a thrift regulator supervised the derivative activities of AIG.

Consolidated supervision means a lot more than just taking a cursory look at the consolidated balance sheet of a financial conglomerate. An important lesson from the global crisis is that we must abandon the silly idea that effective supervision can be done without a good understanding of each of the key businesses of the conglomerate. High-level consolidated supervision of the top five or top ten financial conglomerates is, I think, the most important function that the FSDC should perform, drawing on the resources of all the sectoral regulators as well as the staff of its own permanent secretariat.

Another important function is that of monitoring regulatory gaps and taking corrective action at an early stage. Unregulated or inadequately supervised segments of the financial sector are often the source of major problems. Globally, we have seen the important role played by under-regulated mortgage brokers in the sub-prime crisis.

In India, we have seen the same phenomenon in the case of cooperative banks, plantation companies and accounting/auditing deficiencies in the corporate sector. Cooperative banks were historically under-regulated because RBI believed that their primary regulator was the registrar of cooperative societies. The registrar, of course, did not bother about prudential regulation. Similarly, in the mid-1990s, plantation companies and other collective investment schemes were regulated neither as mutual funds nor as depository institutions. Only after thousands of investors had been defrauded was the regulatory jurisdiction clarified.

As far as accounting and auditing review is concerned, the regulatory vacuum has not been filled even after our experience with Satyam. Neither Sebi nor the registrar of companies undertakes the important task of reviewing published accounting statements for conformity with accounting standards. There is an urgent need for a body like FSDC that systematically identifies these regulatory gaps and develops legislative, administrative and technical solutions to these problems. By contrast, I believe that the role of ‘coordination’ between regulators emphasised in the current title of the high-level coordination committee is the least important role of an FSDC. Some degree of competition and even turf war between two regulators is a healthy regulatory dynamic.

At a crunch, I do not see anything wrong in a dispute between two regulators (or between one regulator and regulatees of another regulator) being resolved in the courts. After all, the Indian constitution gives the judiciary the power to resolve disputes even between two governments!

My favourite example from the US is the court battle between the SEC and the derivative exchanges (supported by their regulator, the CFTC) that led to the introduction of index futures in that country. A truly independent regulator should be able and willing to go to court against another arm of the government in order to perform its mission.

Posted at 10:19 on Fri, 19 Mar 2010


Thu, 18 Mar 2010

Lehman and its computer systems

Perhaps, I have a perverse interest in the computer systems of failed financial firms – I blogged about Madoff and his AS400 last year. Even while struggling to cope with the fantastic 2,200-page report of the court examiner on Lehman, I homed in on the discussion about Lehman’s computer systems:

At the time of its bankruptcy filing, Lehman maintained a patchwork of over 2,600 software systems and applications. ... Many of Lehman’s systems were arcane, outdated or non-standard. Becoming proficient enough to use the systems required training in some cases, study in others, and trial and error experimentation in others. ... Lehman’s systems were highly interdependent, but their relationships were difficult to decipher and not well documented. It took extraordinary effort to untangle these systems to obtain the necessary information.

My limited experience suggests that outdated and unusable software is a problem in most large organizations. I do hope that the ongoing consumerization of information technology will help reduce these problems by putting intense pressure on corporate IT to reform their ways. Perhaps, organizations should consider releasing the source code of most of their proprietary software on their own intranet to help manage the complexity and user unfriendliness of their systems. Consumerization plus crowd sourcing might just be able to tame the beast.

Posted at 12:50 on Thu, 18 Mar 2010


Thu, 11 Mar 2010

Law, Madoff, fairness and interest rates

I would grant that there is probably no fair way for the courts to deal with the mess created by the Madoff fraud. But I am intrigued by the discussions about fairness in the ruling of the US Bankruptcy Court about the rights of the Madoff victims.

I have nothing to say about the part of the judgement which interprets the law, and will confine myself to the fairness example that the judge discusses (page 32):

Investor 1 invested $10 million many years ago, withdrew $15 million in the final year of the collapse of Madoff’s Ponzi scheme, and his fictitious last account statement reflects a balance of $20 million. Investor 2 invested $15 million in the final year of the collapse of Madoff’s Ponzi scheme, in essence funding Investor 1’s withdrawal, and his fictitious last account statement reflects a $15 million deposit. Consider that the Trustee is able to recover $10 million in customer funds and that the Madoff scheme drew in 50 investors, whose fictitious last account statements reflected “balances” totaling $100 million but whose net investments totaled only $50 million.

The judge believes that Investor 1 has no net investment “because he already withdrew more than he deposited” while Investor 2 has a $15 million net investment. Since the recovery of $10 million is 20% of the $50 million net investment of all investors put together, Investor 2 is entitled to $3 million and Investor 1 is entitled to nothing.

The court states that Madoff apparently started his Ponzi scheme (“investment advisory services”) in the 1960s. Since the fraud was exposed at the end of 2008, the Ponzi scheme went on for maybe 40 years. Let us therefore take “many years ago” in the judge’s example to mean 20 years ago.

Between 1988 and 2008, the 3-month US Treasury Bill yield averaged a little over 4%, so Investor 1’s $10 million, compounded at the risk-free rate, would have grown to about $22 million by 2008. After the withdrawal of $15 million, there would still be $7 million left – a little less than half of Investor 2’s $15 million. If you believe in the time value of money, Investor 1 should get a little less than half of what Investor 2 gets. The judge thinks Investor 1 should get nothing.

Alternatively, if you believe that the purchasing power of money is what matters, note that US consumer price inflation during those 20 years averaged about 3%. The $10 million that Investor 1 put in two decades ago would be worth about $18 million in 2008 dollars, and Investor 1 would still have $3 million of net investment left after the withdrawal of $15 million. Yet the judge thinks he should get nothing.
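
For readers who want to check the arithmetic, here is a small Python sketch of the two calculations above. The 4% and 3% averages and the 20-year horizon are the rough approximations used in this post, not precise market data.

    # Back-of-the-envelope check of the two paragraphs above, using the
    # post's approximations: ~4% average T-bill yield, ~3% average inflation.
    principal = 10e6      # Investor 1's deposit, assumed made 20 years earlier
    withdrawal = 15e6     # withdrawn in the final year of the scheme
    years = 20

    # Deposit compounded at the risk-free rate of roughly 4%
    value_riskfree = principal * 1.04 ** years        # about $21.9 million
    left_riskfree = value_riskfree - withdrawal       # about $6.9 million remains

    # Deposit restated in 2008 dollars at roughly 3% inflation
    value_real = principal * 1.03 ** years            # about $18.1 million
    left_real = value_real - withdrawal               # about $3.1 million remains

    print(round(left_riskfree / 1e6, 1), round(left_real / 1e6, 1))  # 6.9 3.1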

Posted at 15:08 on Thu, 11 Mar 2010


Tue, 09 Mar 2010

Report on Rating Agency Regulation in India

Last week, the Reserve Bank of India published the Report of the Committee on Comprehensive Regulation of Credit Rating Agencies appointed by the Government of India (more precisely, by the High Level Coordination Committee on Financial Markets). The report was accompanied by a study by the National Institute of Securities Markets entitled An assessment of the long term performance of the Credit Rating Agencies in India.

The report provides a comprehensive analysis of the issues mentioned in the terms of reference for the committee. Unfortunately, those terms of reference did not include what I believe are the only two questions worth looking at about credit rating in the aftermath of the global financial crisis:

  1. Rating agencies are fond of saying that “AAA” is just the shortest editorial in the world. Regulators should take the rating agencies at their word and act accordingly: they should give as little regulatory sanction to these ratings as they do to the editorial in a newspaper.
  2. Regulators should make it as easy to start a rating agency as it is to start a newspaper.

These are the two questions that I think need urgent consideration.

As I pointed out in this blog post last year, the US is an outlier in terms of the use of credit ratings in its regulations, and since India has largely adopted US style regulations, it too is an outlier. By unilateral action, India can eliminate all use of credit ratings except what is required by Basel-II. Even Basel-II is not something for which Indian regulators can disown responsibility – India is now a member of the Basel Committee. Indian regulators should be providing thought leadership on eliminating credit rating from Basel-III or Basel-IV.

I am disappointed that India’s apex regulatory forum (the High Level Coordination Committee on Financial Markets), having recognized the important role of credit rating agencies in the global crisis, did not bother to ask the truly important questions. All the more so because the report did a good job of addressing the questions that were referred to it in the terms of reference. If only the same bunch of competent people had been asked the right questions!

Posted at 18:05 on Tue, 09 Mar 2010


Sun, 07 Mar 2010

Bayesians in finance redux

In November last year, I wrote a brief post about Bayesians in finance. The post was brief because I thought that what I was saying was obvious. A long and inconclusive exchange with Naveen in the comments section of another post has convinced me that a much longer post is called for. The Bayesian approach is perhaps not as obvious as I assumed.

When finance professors walk into a classroom, they want to build on what the statistics professors have covered in their courses. When I am teaching portfolio theory, I do not want to spend half an hour explaining the meaning of covariance; I would like to assume that the statistics professor has already done that. That is how division of labour is supposed to work in a pin factory or in a university.

Unfortunately, there is a problem with this division of labour – most statistics professors teach classical statistics. That is true even of those statisticians who prefer Bayesian techniques in their research work! The result is that many finance students wrongly think that when the finance professors talk of expected returns, variances and betas, they are referring to the classical concepts grounded in relative frequencies. Worse still, some students think that the means and covariances used in finance are sample means and sample covariances and not the population means and covariances.

In business schools like mine where the case method dominates the pedagogy, these errors are probably less common (or at least do less damage), because in the case context the need for judgemental estimates for almost everything of interest becomes painfully obvious to the students. The certainties of classical statistics dissolve into utter confusion when confronted with messy “case facts”, and this is entirely a good thing.

But if cases are not used or used sparingly, and the statistics courses are predominantly classical, there is a very serious danger that finance students end up thinking of the probability concepts in finance in classical relative frequency terms.

Nothing could be farther from the truth. To see how differently finance theory looks at these things, it is instructive to go back to some of the key papers that established and developed modern portfolio theory over the years.

Here is how Markowitz begins his Nobel prize winning paper (“Portfolio Selection”, Journal of Finance, 1952) more than half a century ago:

The process of selecting a portfolio may be divided into two stages. The first stage starts with observation and experience and ends with beliefs about the future performances of available securities. The second stage starts with the relevant beliefs about future performances and ends with the choice of portfolio.

Many finance students would probably be astonished to read words like observation, experience, and beliefs instead of terms like historical data and maximum likelihood estimates. This was the paper that gave birth to modern portfolio theory and there is no doubt in Markowitz’ mind that the probability distributions (and the means, variances and covariances) are subjective beliefs and not classical relative frequencies.

Markowitz is also crystal clear that what matters is not the historical data but beliefs about the future – historical data is of interest only in so far as it helps form those beliefs about the future. He also seems to take it for granted that different people will have different beliefs. He is helping each individual solve his or her portfolio problem and is not bothered about how these choices affect the equilibrium prices in the market.

When William Sharpe developed the Capital Asset Pricing Model that won him the Nobel prize, he was trying to determine the market equilibrium and he had to assume that all investors have the same beliefs but did so with great reluctance:

... we assume homogeneity of investor expectations: investors are assumed to agree on the prospects of various investments – the expected values, standard deviations and correlation coefficients described in Part II. Needless to say, these are highly restrictive and undoubtedly unrealistic assumptions. However, ... it is far from clear that this formulation should be rejected – especially in view of the dearth of alternative models

But finance theory quickly went back to the idea that investors had different beliefs. Treynor and Black (“How to use security analysis to improve portfolio selection,” Journal of Business, 1973) interpreted the CAPM as saying that:

...in the absence of insight generating expectations different from the market consensus, the investor should hold a replica of the market portfolio.

Treynor and Black devised an elegant model of portfolio choice for investors with out-of-consensus beliefs.

The viewpoint in this paper is that of an individual investor who is attempting to trade profitably on the difference between his expectations and those of a monolithic market so large in relation to his own trading that market prices are unaffected by it.

Similar ideas can be seen in the popular Black-Litterman model (“Global Portfolio Optimization,” Financial Analysts Journal, September-October 1992). Black and Litterman started with the following postulates:

  1. We believe there are two distinct sources of information about future excess returns – investor views and market equilibrium.
  2. We assume that both sources of information are uncertain and are best expressed as probability distributions.
  3. We choose expected excess returns that are as consistent as possible with both sources of information.

Even if we stick to the market consensus, the CAPM beta itself has to be interpreted with care. The derivation of the CAPM makes it clear that the beta is actually the ratio of a covariance to a variance and both of these are parameters of the subjective probability distribution that defines the market consensus. Statisticians instantly recognize that the ratio of a covariance to a variance is identical to the formula for a regression coefficient and are tempted to reinterpret the beta as such.
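
For concreteness, the identity the statistician is spotting is the standard textbook expression for the CAPM beta (written here in generic notation, not taken from any of the papers quoted above):

    \beta_i \;=\; \frac{\operatorname{Cov}(R_i, R_M)}{\operatorname{Var}(R_M)}

which is algebraically the same as the population regression coefficient of the security’s return on the market return.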

This may be formally correct, but it is misleading because it suggests that the beta is defined in terms of a regression on past data. That is not the conceptual meaning of beta at all. Rosenberg and Guy explained the true meaning of beta very elegantly in their paper (“Prediction of beta from investment fundamentals”, Financial Analysts Journal, 1976) introducing what are now called fundamental betas:

It is instructive to reach a judgement about beta by carrying out an imaginary experiment as follows. One can imagine all the various events in the economy that may occur, and attempt to answer in each case the two questions: (1) What would be the security return as a result of that event? and (2) What would be the market return as a result of that event?

This approach is conceptually revealing but is not always practical (though if you are willing to spend enough money, you can access the fundamental betas computed by firms like Barra, which Barr Rosenberg founded and later left). In practice, our subjective belief about the true beta of a company has to combine a number of inputs that go well beyond a mechanical regression on historical returns.

Much of the above discussion is valid for estimating Fama-French betas and other multi-factor betas, for estimating the volatility (used for valuing options and for computing convexity effects), for estimating default correlations in credit risk models and many other contexts.

Good classical statisticians are quite smart and in a practical context would do many of the things discussed above when they have to actually estimate a financial parameter. In my experience, they usually agree that (a) there is a lot of randomness in historical returns; (b) the data generating process does not remain unchanged for too long; (c) therefore in practice there is not enough data to avoid sampling error; and (d) hence it is desirable to use a method in which sampling error is curtailed by fundamental judgement.

On the other side, Bayesians shamelessly use classical tools because Bayes theorem is an omnivore that can digest any piece of information whatever its source and put it to use to revise the prior probabilities. In practical terms, Bayesians and classical statisticians may end up doing very similar stuff.
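
As an illustration of how the two camps can end up in the same place, here is a minimal sketch (my own construction, not taken from any of the papers cited above) of a precision-weighted combination of a judgemental prior for beta with a noisy historical estimate; all the numbers are invented for illustration.

    # Minimal sketch: shrink a noisy historical beta towards a judgemental prior.
    # With normal distributions this precision-weighted average is the Bayesian
    # posterior mean, and it also resembles a classical shrinkage estimator.
    prior_beta = 1.0     # judgemental / fundamental prior ("an average stock")
    prior_se = 0.30      # how much confidence we place in that judgement

    hist_beta = 1.45     # OLS beta from, say, 60 months of returns (invented)
    hist_se = 0.25       # its sampling standard error (invented)

    w_prior = 1 / prior_se ** 2          # precision of the prior
    w_hist = 1 / hist_se ** 2            # precision of the historical estimate

    posterior_beta = (w_prior * prior_beta + w_hist * hist_beta) / (w_prior + w_hist)
    posterior_se = (w_prior + w_hist) ** -0.5

    print(round(posterior_beta, 2), round(posterior_se, 2))  # 1.27 0.19
    # The noisy historical 1.45 is pulled back towards the prior of 1.0.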

The advantage of shifting to Bayesian statistics and subjective probabilities is primarily conceptual and theoretical. It would eliminate confusion in the minds of students on the ontological status of the fundamental constructs of finance theory.

I am therefore now debating in my own mind whether finance professors should spend some time in the classroom discussing subjective probabilities.

What would it be like to begin the first course in finance with a case study of subjective probabilities – something like the delightful paper by Karl Borch (“The monster in Loch Ness”, Journal of Risk and Insurance, 1976)? Borch analyzes the probability that the Loch Ness monster exists (and would be captured within a one-year period), given that a large company had to pay a rather high 0.25% premium to obtain a million pound insurance cover from Lloyd’s of London against that risk. This is obviously a question which a finance student cannot refuse to answer; yet there is no obvious way to interpret this probability in relative frequency terms.
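
One crude way to put a number on it: if we assume, purely for illustration, that the premium was actuarially fair (risk-neutral pricing with no expense loading, which is neither how Lloyd’s prices risk nor Borch’s actual analysis), the premium would equal the expected payout, so the implied one-year probability of capture is just the premium rate, and a loaded premium makes this an upper bound:

    p \;\lesssim\; \frac{\text{premium}}{\text{cover}} \;=\; \frac{0.0025 \times 1{,}000{,}000}{1{,}000{,}000} \;=\; 0.25\%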

Posted at 09:57 on Sun, 07 Mar 2010


Fri, 05 Mar 2010

Greek bond issue

That Greece could borrow money at all (even if it is at 3% above the risk free rate) seems to have calmed the markets a great deal. I am reminded of this piece of Rothschild wisdom:

You are certainly right that there is much to be earned from a government which has no money. But you have to take risks.

That is James Rothschild writing to Nathan Rothschild nearly two centuries ago as quoted by Niall Ferguson, The House of Rothschild: Money’s Prophets 1798-1848, Chapter 4.

Posted at 19:19 on Fri, 05 Mar 2010


Wed, 03 Mar 2010

Regulation by placebo

This is a very nice phrase that I picked up from SEC Commissioner Kathleen Casey’s speech dissenting from the short selling rules that the SEC introduced recently:

But this is regulation by placebo; we are hopeful that the pill we’ve just had the patient take, although lacking potency, will convince him that everything is all right.

Casey’s speech itself was a bit of political grandstanding and came in the context of an SEC vote that split along predictable party lines. I am not therefore inclined to take the speech too seriously. But the phrase “regulation by placebo” very elegantly captures a phenomenon that is all too common in financial sector regulation all over the world.

Securities regulators, banking regulators and other financial regulators have this great urge to be seen to be doing something, regardless of whether that something is the right thing or not. The result is often a half-hearted measure that does not stop the wrongdoing but convinces the public that the evildoers have been kept at bay.

Regular readers of my blog know that I am against short sale restrictions in general. At the very least, I would like short sale restrictions to be accompanied by corresponding and equally severe restrictions on leveraged longs. If you are not allowed to short a stock when it has dropped 10%, then you should not be allowed to buy a stock (with borrowed money) when the stock has risen 10%. Market manipulation is done far more often by longs than by shorts!

Posted at 18:58 on Wed, 03 Mar 2010