Prof. Jayanth R. Varma's Financial Markets Blog

A Blog on Financial Markets and Their Regulation

© Prof. Jayanth R. Varma
jrvarma@iima.ac.in


Tue, 28 Jun 2011

Banking index option spreads during the crisis

Kelly, Lustig and Van Nieuwerburgh have written an NBER Working Paper (Bryan T. Kelly, Hanno Lustig, Stijn Van Nieuwerburgh, “Too-Systemic-To-Fail: What Option Markets Imply About Sector-wide Government Guarantees”, NBER Working Paper No. 17149, June 2011) explaining banking index option spreads during the global financial crisis in terms of the effect of sector-wide government guarantees:

Investors in option markets price in a collective government bailout guarantee in the financial sector, which puts a floor on the equity value of the financial sector as a whole, but not on the value of the individual firms. The guarantee makes put options on the financial sector index cheap relative to put options on its member banks. The basket-index put spread rises fourfold from 0.8 cents per dollar insured before the financial crisis to 3.8 cents during the crisis for deep out-of-the-money options. The spread peaks at 12.5 cents per dollar, or 70% of the value of the index put. The rise in the put spread cannot be attributed to an increase in idiosyncratic risk because the correlation of stock returns increased during the crisis.

I am not convinced by this because the “No more Lehmans” policy implied a guarantee on individual firms and not merely on the sector as a whole. I propose an alternative explanation for the counterintuitive movement of the index spread, based on the idea that the market knew the approximate scale of subprime losses but did not know which banks would bear those losses. What securitization had done was to spread the risk across the whole world, and nobody knew where the risk had ultimately come to rest. However, the total amount of toxic securities could be estimated, and the ABX index provided a market price for what the average losses on these securities would be. In the macabre language that was popular then, the market knew how many murders had taken place, but did not know where the bodies were buried. The interesting implication of this model is that when a “body” (large loss) turns up in one place (bank X), it immediately reduces the chance that a “body” would turn up elsewhere (bank Y) because there was only a fixed number of “bodies” to discover. The fact that bank X has a huge loss reduces the losses that other banks are likely to suffer because the total scale of losses is known.

A simple numerical example using the Black Scholes model illustrates the application of this idea to the basket-index put spread. I consider a banking sector with only two stocks, A and B, each of which is trading at 100. Assuming an equal number of shares outstanding, the index is also at 100. Consider a one-year put option with a strike of 85, a volatility of 20% and, for simplicity, an interest rate of 0 (we are in a ZIRP world!). The put option on each of the two stocks is priced at 2.16 by the Black Scholes formula. Since the two stocks are identical, the price of a basket of puts (half a put on each of the two stocks) is also 2.16. To value the index put at the same strike, assume that the correlation between the two stocks is 0.50. The standard formula for the variance of a sum implies an index volatility of 17.32%, and using a lognormal approximation and the Black Scholes model, the index option is priced at 1.49. The basket-index put spread is 2.16 - 1.49 = 0.67.

Consider now the crisis situation and assume that the correlation rises to 0.60 but nothing else changes. The stock option prices are unchanged, but the higher correlation raises the index volatility to 17.89% and the index put is now worth 1.63. The basket-index put spread declines to 0.53. During the crisis the actual data shows that the spread rose instead of declining as correlations rose. This is the puzzle that Kelly et al are trying to solve.
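
For readers who want to reproduce these diffusion-only numbers, here is a minimal Python sketch (my own illustration, not taken from the paper or the post); the bs_put helper and the variable names are mine, and the one-year maturity is the one assumed in the example above.

    from math import log, sqrt, erf

    def norm_cdf(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_put(spot, strike, vol, t=1.0, r=0.0):
        """Black Scholes price of a European put (interest rate 0 by default)."""
        d1 = (log(spot / strike) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
        d2 = d1 - vol * sqrt(t)
        return strike * norm_cdf(-d2) - spot * norm_cdf(-d1)

    spot, strike, vol = 100.0, 85.0, 0.20

    # Basket of puts: half a put on each of two identical stocks.
    basket = bs_put(spot, strike, vol)                    # 2.16

    for rho in (0.50, 0.60):
        # Equally weighted index of two stocks with correlation rho.
        index_vol = vol * sqrt(0.5 * (1.0 + rho))         # 17.32%, then 17.89%
        index_put = bs_put(spot, strike, index_vol)       # 1.49, then 1.63
        print(rho, round(basket - index_put, 2))          # spread: 0.67, then 0.53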

I now solve the same puzzle using the “where are the bodies buried” model. In this framework, the simple Black Scholes diffusion is supplemented by a jump risk representing the risk that a “body” would be discovered in one of the banks. Assume for simplicity that there is only one “body” to be discovered and that the discovery of that “body” would reduce the value of the affected stock by 25%. As far as the index is concerned, there is no uncertainty at all. One of the stocks goes to 75 and the other remains at 100 (though we do not know which stock would be at which price), and so the index drops to 0.50 x 75 + 0.50 x 100 = 87.50. Assuming the same correlation (0.6) and volatility as before, the index put option price rises from 1.63 to 4.98 because the put option is now much closer to the money.

As far as either of the two stocks is concerned, the position is more complicated. There is a 50% chance that a “body” turns up at that bank, in which case the stock would trade at 75; there is also a 50% chance that there is no “body” in that bank, in which case its stock should trade at 100. Let us make the reasonable assumption that the 50% objective probability is also the risk neutral probability. Before we know where the “body” is buried, the stock price of either bank would be 0.50 x 75 + 0.50 x 100 = 87.50. Note the interesting negative dependence in the tail: if a “body” is discovered in one bank, its price falls from 87.50 to 75, but the price of the other bank rises from 87.50 to 100.00 because it is then clear that there is no “body” there.

Option valuation in this situation can no longer use Black Scholes because of the jump risk. Adapting the basic idea of the Merton jump model, we can value this put as follows. If the stock price jumps to 100, the Black Scholes put option price would be 2.16 as computed earlier. But if the price jumps to 75, the Black Scholes put price rises dramatically to 12.58 (the put is now actually in the money). Since the risk neutral probabilities of these two events are 50% each, the value of the stock option (before we know where the “body” is buried) is 0.50 x 2.16 + 0.50 x 12.58 = 7.37. The basket-index put spread is now 7.37 - 4.98 = 2.39.
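
The “where are the bodies buried” numbers can be reproduced with the same helper (again a sketch of my own, with the one-year maturity assumed above); the single jump is resolved before the diffusion starts, so each scenario is valued with plain Black Scholes and the two scenarios are mixed with their 50% risk neutral weights.

    from math import log, sqrt, erf

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_put(spot, strike, vol, t=1.0, r=0.0):
        d1 = (log(spot / strike) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
        d2 = d1 - vol * sqrt(t)
        return strike * norm_cdf(-d2) - spot * norm_cdf(-d1)

    strike, vol, rho = 85.0, 0.20, 0.60
    index_vol = vol * sqrt(0.5 * (1.0 + rho))             # 17.89%, as before

    # Index: exactly one "body", so the index falls to 87.50 with certainty.
    index_spot = 0.5 * 75.0 + 0.5 * 100.0
    index_put = bs_put(index_spot, strike, index_vol)     # 4.98

    # Either stock: 50% chance the "body" is here (spot 75), 50% that it is not (spot 100).
    stock_put = 0.5 * bs_put(75.0, strike, vol) + 0.5 * bs_put(100.0, strike, vol)
    print(round(stock_put, 2), round(stock_put - index_put, 2))   # 7.37 and spread 2.39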

The “where are the bodies buried” model produces a rise in the basket-index put spread from 0.67 to 2.39 without any government guarantees at all. At the same time, the basket-index call option spread shows very little change – this is what Kelly et al found in the actual data as well.

We can elaborate and complicate this basic model in many ways. Of course, there can be more than two banks; they may be of different sizes; there may be more than one “body” to be discovered; the number of “bodies” may be uncertain; and the effect of a “body” on the stock price may also be uncertain (random jump size). None of this would change the essential feature of the model – a negative tail dependence between the various bank stock prices.

The key purpose of the model is to demonstrate the pitfalls of using correlation to measure dependence relationships when it comes to tail risk. The dependence in the middle of the distribution (the diffusion process) can be large, positive and rising while the dependence in the left tail is becoming sharply negative. This is the phenomenon that Kelly et al seem to be ignoring completely.

Posted at 19:32 on Tue, 28 Jun 2011     View/Post Comments (5)     permanent link


Fri, 24 Jun 2011

Sending internet banking passwords by mail

I have observed banks in India use several different ways to send internet banking passwords to their customers, but from a security point of view all these methods are totally unsatisfactory.

Many people think that these security risks are trivial and unavoidable. Subconsciously, they think that the bank must anyway store the password somewhere to verify the password that the user types in. But this is wrong. Computers never store user passwords at all – at least they are not supposed to do so. What is stored is a secure cryptographic hash of the password from which the password cannot be recovered with any reasonable amount of computational effort. When a user tries to log in, what happens is that the computer applies the same secure cryptographic hash to the password that the user typed in. If this hash matches the stored password hash, the computer accepts the password as correct and carefully erases (from its own memory) the password that it just read in from the user. Good software programmers are so paranoid about this that before they read the password that a user is typing in, they take care to lock the memory location into RAM (for example, by using mlock in unix) so that during the few milliseconds that the plain text password exists in the computer’s memory, this password is not accidentally written to the hard disk when the operating system manages its virtual memory.
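
As a concrete illustration of the hash-and-compare approach described above, here is a minimal sketch using PBKDF2 from Python’s standard hashlib library; the function names and parameter choices (salt length, iteration count) are mine and are not meant to describe any bank’s actual implementation.

    import hashlib
    import hmac
    import os

    def store_password(password):
        """Return (salt, hash); only these are stored, never the password itself."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        """Re-hash what the user typed and compare; the stored hash reveals nothing."""
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(digest, stored_digest)

    salt, digest = store_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, digest))   # True
    print(verify_password("a wrong guess", salt, digest))                  # False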

Looking at things with this background, it appears to me that any system in which a password exists in plain text printed form even for a few minutes (let alone several days) is an unacceptable and intolerable level of security risk.

There is also a very simple solution to the problem. The most secure way of sending a password to the customer is not to send the password at all! This requires that the bank should not generate the password in the first place. If the user generates the password, then there is no need to send it to him at all. This thought occurred to me when I was examining the process of applying for a PAN number online (a similar process is used for online filing of income tax returns as well). This process addresses the same problem that the bank faces – a PAN number cannot be allotted without receiving signed documents in physical form:

  1. The applicant fills the form online and submits the form.
  2. The system displays an acknowledgement which contains a unique 15-digit acknowledgement number.
  3. The applicant prints the acknowledgement, affixes the photograph, signs it, attaches relevant documents and mails it to the PAN Service Unit.
  4. At the PAN Service Unit, the 15-digit acknowledgement number provides the link between the physical records and the online application to enable processing of the application.

This process can be adapted to the internet banking password problem as follows. The customer applies for internet banking online and chooses a password. As usual, the system stores a secure cryptographic hash of the password but does not enable the online banking facility at this stage. The system generates an acknowledgement number and lets the customer print out an application form which includes this acknowledgement number. The customer mails this form, duly signed, to the bank. After the bank verifies the signature and other documents, it simply enables the password that the user has already generated. At all times, this password is known only to the user; the bank neither records this password on paper nor stores it electronically in plain text.
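
A toy sketch of this flow (the identifiers and the in-memory store are hypothetical, purely for illustration): the salted hash is stored at application time, but the facility stays disabled until the bank verifies the signed form bearing the acknowledgement number.

    import hashlib
    import os
    import secrets

    accounts = {}   # acknowledgement number -> account record (toy in-memory store)

    def apply_online(customer_id, password):
        """Customer chooses a password online; only its salted hash is stored, disabled."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        ack_no = secrets.token_hex(8)          # printed on the application form
        accounts[ack_no] = {"customer": customer_id, "salt": salt,
                            "hash": digest, "enabled": False}
        return ack_no

    def enable_after_verification(ack_no):
        """Called once the signed paper form with this acknowledgement number checks out."""
        accounts[ack_no]["enabled"] = True

    def login(ack_no, password):
        rec = accounts[ack_no]
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), rec["salt"], 200_000)
        return rec["enabled"] and digest == rec["hash"]

    ack = apply_online("CUST001", "chosen-by-customer")
    print(login(ack, "chosen-by-customer"))    # False: facility not yet enabled
    enable_after_verification(ack)
    print(login(ack, "chosen-by-customer"))    # True: the password never left the customer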

Posted at 08:53 on Fri, 24 Jun 2011     View/Post Comments (12)     permanent link


Tue, 14 Jun 2011

Cryptic RBI announcement on banknote numbering

The Reserve Bank of India issued a cryptic press release yesterday saying:

With a view to enhancing operational efficiency and cost effectiveness in banknote printing at banknote presses, it has been decided to issue, to begin with, fresh banknotes of Rs 500 denomination in packets, which may not necessarily all be sequentially numbered. This is consistent with international best practices. Packets of Banknotes in non-sequential number will, as usual, have 100 notes. The bands of the packets containing the banknotes in non-sequential number will clearly be superscribed with the legend, “The packet contains 100 notes not numbered sequentially.”

The confusion comes from the three phrases “enhancing operational efficiency and cost effectiveness in banknote printing”, “to begin with”, and “international best practices” each of which gives a different idea of what this is all about. My very limited understanding of the subject is that there are three reasons for non sequential numbering of currency notes:

  1. The most important and best known is the checksum or security reason seen principally in euro banknotes. The euro banknote contains a checksum and therefore every packet of freshly printed notes is non sequentially numbered – ignoring the factors below, consecutive notes in a packet are nine numbers apart: Z10708476264 would be followed by Z10708476273. This would truly be consistent with “international best practices”, but this can be ruled out because the press release clearly says that only some packets will have non sequential numbers.
  2. The second is the replacement note reason which arises when there are defects while printing a sheet of notes. The defective note is removed and is replaced with a replacement note which usually has a different number in a totally separate replacement note series (for example, star series in India). This is ruled out because it would not be consistent with the phrase “to begin with”. Star series notes were introduced in India five years ago. The annual policy statement for 2006-07 stated:

    Currently, all fresh banknote packets issued by the Reserve Bank contain one hundred serially numbered banknotes. In a serially numbered packet, banknotes with any defect detected at the printing stage are replaced at the presses by banknotes carrying the same number in order to maintain the sequence. As part of the Reserve Bank’s ongoing efforts to benchmark its procedures against international best practices, as also for greater efficiency and cost effectiveness, it is proposed to adopt the STAR series numbering system for replacement of defectively printed banknotes. A ‘star series’ banknote will have an additional character, viz., a star symbol * in the number panel and will be similar in every other respect to a normal bank note and would be legal tender. Any new note packet carrying a star series note will have a band on which it will be indicated that the packet contains a star note(s). The packet will contain one hundred notes, though not in serial order. To begin with, star series notes would be issued in lower denominations, i.e., Rs.10, Rs.20 and Rs.50 in the Mahatma Gandhi series. Wide publicity through issue of press advertisements is being undertaken and banks are urged to keep their branches well informed so as to guide their customers.

  3. The third reason that I am aware of is the column sort. This too arises from defective sheets. The defective sheets are first cut into columns and the “good” columns are cut into notes and packed into bundles which will not be sequentially numbered because of the missing “bad” columns. It does enhance “operational efficiency and cost effectiveness in banknote printing”. It is of course internationally common simply because DeLaRue uses it, and they print notes for many countries around the world. But in light of the developments last year, DeLaRue is not exactly a paragon of “international best practices”.

So what exactly does the RBI mean in its cryptic press release? I fail to see the need for “constructive ambiguity” when it comes to the numbering of banknotes. Any comments that would clarify my understanding of this would be welcome.

Posted at 10:46 on Tue, 14 Jun 2011     View/Post Comments (2)     permanent link


Mon, 13 Jun 2011

Levin-Coburn Report and Goldman Risk Management

The Levin-Coburn report (prepared by the staff of the US Senate Permanent Subcommittee on Investigations) came out while I was on vacation and I finished reading it (nearly 650 pages) only now. In the meantime, the findings of the report have been discussed and analyzed extensively in the press and in the blogs. I will therefore focus on what the report tells us about risk management in a large well run investment bank.

Even as the crisis unfolded, we knew that Goldman was among the few firms that sold and hedged their mortgage portfolio and limited their losses. The Levin-Coburn report gives us a ringside view of how this process actually works. Of course, it is ugly, but it is also fascinating. Three examples stand out.

In short, implementing a risk mitigation strategy was extremely hard even though (a) Goldman had the right view on the market, and (b) it was willing to place its self interest far above that of its “customers” in executing its desired trades.

Finally, anybody who thinks that investment banks like Goldman would give them a fair deal should read the gory details of how Goldman dumped toxic securities (Hudson, Anderson and Timberwolf) on investors around the world to protect/further its own interests. There have been many press reports about these shady deals, but the wealth of detail in the report (pages 517-560) is much more than what I have seen elsewhere. The Abacus deal which led to the record $550 million settlement with the SEC appears much less sinister in comparison.

Posted at 12:10 on Mon, 13 Jun 2011     View/Post Comments (1)     permanent link


Sat, 04 Jun 2011

RBI Report on Financial Holding Companies

I participated in a panel discussion on CNBC-TV18 about the recent report of an RBI Working Group on Financial Holding Companies (FHCs). The transcript and video are available at the CNBC-TV18 web site. I made four points:

  1. The global financial crisis has shown that we need a funeral plan for our largest financial conglomerates. The FHC model makes it easier to deal with the failure of a part of a conglomerate. The failing subsidiary can be wound up leaving the rest of the conglomerate intact. In the current model, the failing business unit may own other healthy businesses, and they all go down if the parent unit is resolved.
  2. It is natural that a report by the Reserve Bank would recommend that the FHC should be regulated by the RBI in line with the central bank’s mandate regarding financial stability. I am sure there will be a lot of debate on that. It is perfectly possible that this role could move up to the Financial Stability and Development Council (FSDC) or some similar body. The competencies required to regulate FHCs probably do not exist in the Indian regulatory space today, and if we are going to build those capabilities, then it probably makes sense to create them in a regulatory collegium like the FSDC.
  3. A big advantage of the FHC model is that the FHC does not have any operations of its own – it does nothing other than own shares in each of the operating subsidiaries. Each operating subsidiary can be independently regulated by its own sectoral regulator. The only thing that needs to be done at the FHC level is consolidated prudential supervision of the conglomerate. That is a lot easier than consolidated supervision of an entity which is also an operating financial company.
  4. Business houses that set up banks should be subject to the FHC regime. If a manufacturing company chooses to own a bank or a systemically important insurance company or asset manager, then the financial regulators have an interest in the solvency and governance of the parent manufacturing company itself. The regulation of the corporate holding company will essentially be in terms of how much leverage it can have and the minimum governance standards it must meet. If a manufacturing company does not like that, then it should not get into the financial sector.

Posted at 12:06 on Sat, 04 Jun 2011     View/Post Comments (0)     permanent link


Wed, 01 Jun 2011

When is an algorithm not an algorithm?

An algorithmic description becomes a mere description and not an algorithm when you ask the US SEC and CFTC to interpret the term. Section 719(b) of the Dodd-Frank Act mandated a study on algorithmic description of derivative contracts in the following terms:

The Securities and Exchange Commission and the Commodity Futures Trading Commission shall conduct a joint study of the feasibility of requiring the derivatives industry to adopt standardized computer-readable algorithmic descriptions which may be used to describe complex and standardized financial derivatives.

The algorithmic descriptions defined in the study shall be designed to facilitate computerized analysis of individual derivative contracts and to calculate net exposures to complex derivatives. The algorithmic descriptions shall be optimized for simultaneous use by— (A) commercial users and traders of derivatives; (B) derivative clearing houses, exchanges and electronic trading platforms; (C) trade repositories and regulator investigations of market activities; and (D) systemic risk regulators.

When the SEC and CFTC published their joint study in April, they redefined the mandate of the study completely as follows:

Section 719(b) of the Dodd-Frank Act requires that the Commissions consider “algorithmic descriptions” of derivatives for the purposes of calculating “net exposures.” An algorithm is a step-by-step procedure for solving a problem, especially by a computer, which frequently involves repetition of an operation. Algorithmic descriptions, therefore, would refer to a computer representation of derivatives contracts that is precise and standardized, allowing for calculations of net exposures. While it is conceivable to represent derivatives as algorithms – by reflecting the steps necessary to calculate net exposures and other analysis as computer code – such an approach would be very difficult given the divergence of assumptions and complex modeling needed to calculate net exposures. Accordingly, the staff have interpreted “algorithmic descriptions” to mean the representation of the material terms of derivatives in a computer language that is capable of being interpreted by a computer program.

This is truly astounding. The Commissions clearly understood that they were flagrantly violating the express provisions of the law. They are brazenly telling the lawmakers that they will do not what the law asks them to do, but what they find convenient to do. If only the entities that the SEC regulates could do the same thing! Imagine the SEC telling companies that they need shareholder approval for certain matters, and the companies brazenly saying that since calling shareholder meetings is very difficult, they will “interpret” shareholder approval to mean board approval. What the two commissions have done is no less absurd than this.

The Dodd-Frank Act explicitly states “The study shall be limited to ... derivative contract descriptions and will not contemplate disclosure of proprietary valuation models.” This is important because for many complex derivatives, there are no good valuation algorithms. Dodd-Frank is not concerned with valuation; it is talking about the payoffs of the derivatives. I do not understand what is so difficult about describing derivative payoffs algorithmically. The Church-Turing thesis states (loosely speaking) that everything that is at all computable is computable using a computer algorithm (Turing machine). Since derivative payoffs are clearly computable, algorithmic descriptions are clearly possible.
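
To see how undemanding the statutory requirement actually is, consider a hypothetical machine-readable description of a plain vanilla option payoff (the field names below are purely illustrative and are not an industry standard); the point is only that payoffs, as opposed to valuations, are straightforward to express as code.

    # A hypothetical machine-readable term sheet: data, not a valuation model.
    put_terms = {"type": "put", "strike": 85.0, "notional": 1.0}

    def payoff(terms, underlying_level):
        """Compute the payoff of a contract from its terms: a purely mechanical step."""
        if terms["type"] == "put":
            return terms["notional"] * max(terms["strike"] - underlying_level, 0.0)
        if terms["type"] == "call":
            return terms["notional"] * max(underlying_level - terms["strike"], 0.0)
        raise ValueError("unknown contract type")

    print(payoff(put_terms, 75.0))   # 10.0: the payoff, not the value, of the derivative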

A year ago, I blogged about an SEC proposal to require algorithmic description for complex asset backed securities:

We are proposing to require that most ABS issuers file a computer program that gives effect to the flow of funds, or “waterfall,” provisions of the transaction. We are proposing that the computer program be filed on EDGAR in the form of downloadable source code in Python. ... (page 205)

Under the proposed requirement, the filed source code, when downloaded and run by an investor, must provide the user with the ability to programmatically input the user’s own assumptions regarding the future performance and cash flows from the pool assets, including but not limited to assumptions about future interest rates, default rates, prepayment speeds, loss-given-default rates, and any other necessary assumptions ... (page 210)

The waterfall computer program must also allow the use of the proposed asset-level data file that will be filed at the time of the offering and on a periodic basis thereafter. (page 211)

The joint study does not reference this proposal at all. Nor does it give any clear rationale for dropping the algorithmic requirement. Interestingly, last week, the Economist described a typo in a prospectus that could cost $45 million:

On February 11th Goldman issued four warrants tied to Japan’s Nikkei index which were described in three separate filings amounting to several hundred pages. Buried in the instructions to determine the settlement price was a formula that read “(Closing Level – Strike Level) x Index Currency Amount x Exchange Rate”. It is Goldman’s contention that rather than multiplying the currency amount by the exchange rate, it should have divided by the exchange rate. Oops.

It is exactly to prevent situations like this that algorithmic descriptions are needed. By running a test suite on each such description, errors can be spotted before the documentation is finalized. Clearly, the financial services industry does not like this kind of transparency and the regulators are so completely captured by the industry that they will openly flout the law to protect the regulatees.
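
Here is a sketch of the kind of test that an algorithmic description makes possible (the numbers are invented for illustration and the exchange rate is assumed to be quoted as yen per dollar; these are not Goldman’s actual terms). A single hand-worked settlement case immediately flags the version of the formula that multiplies by the exchange rate instead of dividing.

    # Two candidate settlement formulas for a dollar-settled warrant on a yen index.
    def settlement_multiply(closing, strike, index_currency_amount, yen_per_dollar):
        return (closing - strike) * index_currency_amount * yen_per_dollar

    def settlement_divide(closing, strike, index_currency_amount, yen_per_dollar):
        return (closing - strike) * index_currency_amount / yen_per_dollar

    def test_settlement(formula):
        """Hand-worked case: 500 index points x 100 yen per point = 50,000 yen,
        which at 80 yen per dollar should settle at 625 dollars."""
        result = formula(closing=10_500, strike=10_000,
                         index_currency_amount=100, yen_per_dollar=80)
        assert abs(result - 625.0) < 1e-9, "got %r, expected 625.0" % result

    test_settlement(settlement_divide)      # passes silently
    test_settlement(settlement_multiply)    # AssertionError: got 4000000.0, expected 625.0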

Posted at 21:31 on Wed, 01 Jun 2011     View/Post Comments (5)     permanent link