Prof. Jayanth R. Varma's Financial Markets Blog

A Blog on Financial Markets and Their Regulation

© Prof. Jayanth R. Varma


Wed, 19 Jul 2017

Why Aadhaar transaction authentication is like signing a blank paper

Using Aadhaar (India’s biometric authentication system) to verify a person’s identity is relatively secure, but using it to authenticate a transaction is extremely problematic. Every other form of authentication is bound to a specific transaction: I sign a document, I put my thumb impression on a document, I digitally sign a document (or message, as cryptographers prefer to call it). With Aadhaar, I put my thumb (or another finger) on a fingerprint-reading device, and not on the document that I am authenticating. How can anybody establish what I intended to authenticate, and what the service provider intended me to authenticate? Aadhaar authentication ignores the fundamental tenet of authentication that a transaction authentication must be inseparably bound to the document or transaction that it is authenticating. Therefore using Aadhaar to authenticate a transaction is like signing a blank sheet of paper on which the other party can write whatever it wants.
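To make the contrast concrete, here is a minimal sketch (my own illustration, not the actual Aadhaar or DSC protocol; the key, messages and function names are all hypothetical, and an HMAC stands in for a real digital signature). A signature is computed over the document bytes, so it cannot be transplanted onto a different document; a biometric identity check returns a bare yes/no that carries no reference to any document at all:

```python
import hashlib
import hmac

# Hypothetical stand-in for a real signing key.
key = b"signer-private-key"

def sign(document: bytes) -> bytes:
    # The tag is computed over these exact document bytes, so the
    # authentication is inseparably bound to this document.
    return hmac.new(key, document, hashlib.sha256).digest()

doc = b"Issue ONE SIM card to the customer"
tag = sign(doc)

# If the service provider alters the document, verification fails.
tampered = b"Issue TWO SIM cards to the customer"
assert hmac.compare_digest(tag, sign(doc))            # original accepted
assert not hmac.compare_digest(tag, sign(tampered))   # altered doc rejected

# By contrast, an Aadhaar-style check only answers "is this the person?"
def biometric_auth(fingerprint_matches: bool) -> bool:
    return fingerprint_matches  # nothing here binds to any transaction

# The same "yes" can be attached by the operator to any transaction.
assert biometric_auth(True)
```

The point of the sketch is the type of the output: `sign` returns a value that is a function of the document, while the biometric check returns a document-independent boolean that the relying party is free to reuse.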

All this was brought home to me when I bought a new SIM card recently and was asked to authenticate myself with a fingerprint. The employee of the telecom company told me that there was a problem and I needed to try again. Being a little suspicious, I stretched forward and twisted my neck to peep at the computer screen in front of the employee (this screen would otherwise not have been visible to me). My suspicion was allayed on seeing an error message on the screen, and I tried again only to get the same error message. After three attempts, the employee suggested that I come back the next day. Back home, I saw three emails from UIDAI (Unique Identification Authority of India) stating “Your Aadhaar number ___ was used successfully to carry out e-KYC Authentication using ‘Fingerprint’ on ___ at ___ Hrs at a device deployed by ___.” Note the word successfully.

That is when I realized that the error message that I saw on the employee’s screen was not coming from the Aadhaar system, but from the telecom company’s software. That is a huge problem. This conclusion was corroborated the next day when after one more error message, I found that the employee had left one field in the form partially filled and the error message disappeared when that was corrected.

Let us think about why this is a HUGE problem. Very few people would bother to go through the bodily contortion required to read a screen whose back is turned towards them. An unscrupulous employee could simply get me to authenticate the fingerprint once again even though there was no error, and use the second authentication to allot a second SIM card in my name. He could then give me the first SIM card and hand over the second SIM to a terrorist. When that terrorist is finally caught, the SIM that he was using would be traced back to me and my life would be utterly and completely ruined.

Actually, even my precaution of trying to read the employee’s screen is completely pointless. The screen is not an inseparable part of the fingerprint reader. On the contrary, the fingerprint reader is attached by a flimsy cable to a computer (which is out of view), and the screen is purportedly attached to the same computer. It is very easy to attach the fingerprint reader to one computer (from which a malicious transaction is carried out) and attach the screen on the counter to another computer which displays the information that I expect to see.
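The decoupling described above can be sketched in a few lines (a hypothetical illustration of the attack pattern, not any real system; all names and messages are invented). Because the authentication result is not bound to any transaction, what the customer sees and what is actually submitted are free to disagree:

```python
def authenticate(fingerprint: str) -> bool:
    # Stand-in for the Aadhaar yes/no identity check.
    return fingerprint == "valid-print"

def rogue_terminal(fingerprint: str):
    # What the customer is shown on the counter screen...
    shown_on_screen = "ERROR: please try again"
    # ...versus what actually happens with the genuine authentication.
    ok = authenticate(fingerprint)
    submitted = "allot SECOND SIM" if ok else None
    return shown_on_screen, submitted

screen, txn = rogue_terminal("valid-print")
assert screen.startswith("ERROR")   # customer believes the attempt failed
assert txn == "allot SECOND SIM"    # a transaction was authorised anyway
```

Nothing in the authentication step constrains `submitted` to match `shown_on_screen`; only a signature over the transaction itself would close that gap.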

Another way of looking at the same thing is that a rogue employee of the telecom company could effortlessly execute what is known in computer security as an MitM (Man in the Middle) attack on the communication between me and the Aadhaar system. In fact, I see some analogies between the problem that I am discussing and the MitM attack described by Nethanel Gelernter, Senia Kalma, Bar Magnezi, and Hen Porcilan in their recent paper (h/t Bruce Schneier). Neither I nor the Aadhaar system has any way of detecting or foiling this MitM attack.

I think the whole model is fundamentally broken, and Aadhaar should be used only to verify identities, and not to authenticate transactions. Transaction authentication must happen with a thumb impression, a physical signature, a digital signature or something similar that is inseparably bound to a document.

Posted at 21:36 on Wed, 19 Jul 2017     View/Post Comments (0)     permanent link

Sat, 15 Jul 2017

Secret deals between exchanges and traders: securities fraud implications

Dolgopolov has a nice paper on the conditions under which secret arrangements between exchanges and high frequency traders might or might not constitute securities fraud. Modern exchanges use complex order types and intricate order hiding and matching rules, and they could claim that any bugs or flaws in their trading protocols are honest implementation mistakes. Smart traders who exploit these trading imperfections and frictions could simply claim to be skillful beneficiaries who discovered the bugs by their own effort. In many cases, there appears to be collusion between the exchange and the HFT firms (the exchanges often disclose undocumented features and bugs privately to their best customers in return for getting more business from these firms), but this is not easy to prove. Dolgopolov proposes legal theories under which securities fraud liability could be imposed on the HFT firms themselves.

For over a decade now, I have been arguing for a different solution: regulators should mandate that critical exchange software be open source (here, here, here and here). At the risk of sounding like a broken record, I would like to reiterate my view that “regulators and self regulatory organizations have not yet understood the full power of the open source methodology in furthering the key regulatory goals of market integrity.”

Posted at 12:58 on Sat, 15 Jul 2017     View/Post Comments (0)     permanent link

Sun, 09 Jul 2017

Global Capital Flows: VIX versus US Fed

Historically, the VIX (the volatility of the US stock market implied by option prices) has been an important barometer of global risk aversion with a strong influence on global capital flows. A BIS Working Paper published last month (Avdjiev, Gambacorta, Goldberg and Schiaffi, “The Shifting Drivers of Global Liquidity”) demonstrates that this changed in the aftermath of the Global Financial Crisis, with US monetary policy becoming the dominant driver of capital flows while the VIX declined in importance. The authors also point out that this phenomenon peaked in 2013 and that there has been a partial return to pre-crisis patterns since then.

The results make intuitive sense: as global central banks pursued unconventional monetary policy, a large amount of duration risk ended up on the ever-expanding balance sheets of these central banks. They thus became the marginal risk taker in the economy. (The authors use the Wu-Xia shadow rate as their measure of US monetary policy to take account of the impact of unconventional monetary policy.) Since 2013, the central banks have been in tapering mode and are no longer the marginal risk taker in the economy.

Though the authors do not venture down this path, I think their results explain well why the 2013 taper talk had such a drastic impact on emerging markets while the coordinated tightening by global central banks during the last year has had such a muted impact. The marginal risk taker is now the private investor and the low level of VIX currently indicates that the marginal risk taker is in “risk on” mode. This suggests that we should be looking at the VIX rather than at global monetary policy for the early warning signs of the next wave of turbulence in emerging markets.

Posted at 13:51 on Sun, 09 Jul 2017     View/Post Comments (0)     permanent link

Sat, 08 Jul 2017

Electronic banking liability allocation

A couple of days back, the Reserve Bank of India (RBI) issued new guidelines on who bears the loss from online banking frauds. The effect is to limit the liability of the customer and thereby transfer the loss to the banks. This measure has been seen as a customer-friendly one, but basic economics teaches us to be careful about coming to such a conclusion. In equilibrium, banks would probably recover all expenses incurred by them from their customers. In fact, today, bank customers in India are probably paying higher fees as banks try to recover their bad loan losses from their customers. Unless banking becomes more competitive, the effect of the RBI regulation would more likely be a transfer from one group of customers (those who do not use online banking or have not been defrauded) to those who have lost money.

I think that the RBI regulation is a very good move for a very different reason: incentive compatibility. The important thing is that the regulation places losses on the party that can do something to reduce frauds. A customer cannot improve the bank’s computer security; she cannot ensure that the bank patches all its software, follows a good password policy, and so on. Only the bank can do all this. Unfortunately, computer security does not receive adequate attention from the top management of banks in India. If the new policy helps concentrate the minds of top management, that would be a good thing. If that does not happen, maybe the banks will wake up when the losses materialize. That is the true benefit of the new regulation: it has the potential to reduce online frauds.

Posted at 21:54 on Sat, 08 Jul 2017     View/Post Comments (0)     permanent link