One justification for the very broad powers granted to regulators under the Orderly Liquidation Authority of Dodd-Frank’s Title II is that they would allow the FDIC to act very quickly. This usually means “act very quickly once the process is initiated,” but a key aspect of insolvency resolution is that it also has to be initiated quickly, so as to limit the shareholder and manager moral hazard that can make things even worse.

Yet the initiation of insolvency procedures in Dodd-Frank follows from a so-called “three key turning” mechanism. The first key is that the Treasury Secretary has to determine that the firm is “in default or in danger of default,” after consulting with the President (possibly the fourth key). The second and third keys are that two-thirds of the Federal Reserve Board, and two-thirds of either the Securities and Exchange Commission board for investment banks, the Federal Insurance Office board for insurance companies, or the Federal Deposit Insurance Corporation board, must have recommended the initiation of resolution procedures. As should be apparent, this process requires the coordination of multiple agencies and multiple board members, and is unlikely to be triggered rapidly, if only because of coordination considerations.
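To make the conjunction explicit, here is a minimal sketch of that trigger condition in Python (a hedged illustration, not the statutory text; the function and field names are my own, while the agencies and two-thirds thresholds are as described above):

```python
from dataclasses import dataclass

@dataclass
class Vote:
    """A board vote: how many members voted in favor, out of how many."""
    in_favor: int
    total: int

    def supermajority(self, threshold: float = 2 / 3) -> bool:
        return self.total > 0 and self.in_favor / self.total >= threshold

def ola_triggered(treasury_recommends: bool,
                  consulted_president: bool,
                  fed_board: Vote,
                  sector_regulator: Vote) -> bool:
    """All keys must turn: the Treasury Secretary (after consulting the
    President), two-thirds of the Fed Board, and two-thirds of the board of
    the firm's sector regulator (SEC, FIO, or FDIC, depending on the firm)."""
    return (treasury_recommends
            and consulted_president
            and fed_board.supermajority()
            and sector_regulator.supermajority())

# Example: the Fed Board agrees 5-2, but the sector regulator falls one vote
# short of two-thirds, so resolution is not initiated.
print(ola_triggered(True, True, Vote(5, 7), Vote(3, 5)))  # False
```

The point of the sketch is simply that the trigger is a conjunction: any single hold-out key is enough to delay initiation.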

Moreover, because initiating insolvency resolution is an admission that prior supervision has failed, and because the costs of delaying initiation are essentially shifted onto the FDIC, which manages the resolution process, there are incentives not to recognize the insolvency, or to keep quiet about it. This was a huge problem, for example, during the 1980s Savings & Loan crisis, where insolvent thrifts remained open for an average of 17 months before resolution was initiated, and in a few cases for as much as ten years. The 1991 FDIC Improvement Act sought to fix this problem by giving the FDIC the power to initiate procedures itself, rather than having to rely on the bank’s primary regulator, and by allowing the FDIC to act before the bank is effectively insolvent, through what is called “prompt corrective action.” This has, however, obviously not been a success: in most cases of bank failure during 2008-2009 no prompt corrective action was taken, and procedures were initiated only after the bank’s equity had dropped into negative territory. This means that, even without rules requiring the coordination of multiple agencies with possibly misaligned incentives, regulators’ incentives and knowledge problems generally push back the initiation of insolvency procedures.

We actually have something close to a benchmark for how hard it might be for all “three keys” to agree and coordinate. The Systemic Risk Exemption of the 1991 FDIC Improvement Act relies on a similar mechanism: it requires a two-thirds vote of the FDIC’s board, a two-thirds vote of the Fed’s Board of Governors, and a determination by the Treasury Secretary, who has to consult with the President. Triggering this exemption allows the FDIC to bypass the “least cost resolution” provisions of the FDICIA and to be more generous with its “insurance” fund, providing larger coverage to “uninsured” depositors than is usually the case. It’s essentially an institutionalized bailout procedure.

Despite the fact that it would have allowed the FDIC a lot more flexibility, the Systemic Risk Exemption was invoked only three times in nearly 20 years. All three cases are recent: Citigroup, Bank of America, and Wachovia. In the case of Wachovia, those powers were ultimately not used, as Wells Fargo purchased it instead. This suggests that such powers are hard to invoke and use, and it may tell us something about the prospects of triggering the Orderly Liquidation Authority. On the other hand, it is true that the Systemic Risk Exemption could have been invoked much more often under a chairman more bailout-happy than Sheila Bair was (say, Geithner).

Given those features of the Orderly Liquidation Authority, there is a real chance that initiation will be delayed. Delayed initiation means larger losses, more adverse market reactions, and stronger temptations to bail out, with accompanying calls for further regulation.

Why am I telling you all this? Well, ZeroHedge posted a very revealing figure detailing the new European bank resolution directive and what could be called its “eight key turning” mechanism…


Yesterday I had the honor of participating in the Charles Street Symposium organized by the Legatum Institute in London, under the topic “What Would Hayek Say Today (Really)?”. The essay I presented is titled “A Hayekian Critique of the New Financial Institutions Insolvency Policies.” Also check out the other essays; they are all exceptional. My personal favorites are those of Zachary Caceres and Wolf von Laer.

Thus, the right to terminate or close-out financial market contracts is important to the stability of financial market participants in the event of an insolvency and reduces the likelihood that a single insolvency will trigger other insolvencies due to the nondefaulting counterparties’ inability to control their market risk. The right to terminate or close-out protects federally supervised financial institutions, such as insured banks, on an individual basis, and by protecting both supervised and unsupervised market participants, protects the markets from systemic problems of “domino failures.”

Source: Ireland, Oliver. 1999. “Testimony of Oliver Ireland, Associate General Counsel, Board of Governors of the Federal Reserve System, on the proposed Bankruptcy Reform Act of 1999.” Subcommittee on Commercial and Administrative Law, Committee on the Judiciary. U.S. House of Representatives, March 18.

The privileges of qualified financial contracts to avoid the bankruptcy stay, greatly expanded by a 2005 amendment to the bankruptcy laws, were one of the principal sources of so-called “disorderly” liquidation in the fall of 2008, and the main motivation behind most of the 2008 bailouts. They became a primary source of “systemic risk.” See Roe, Mark J. 2011. “Derivatives Market’s Payment Priorities as Financial Crisis Accelerator.” Stanford Law Review 63 (3): 539-590.

File in “systemic risk exaggerations.”

Source: Cihak, Martin, and Erlend Nier. 2009. The Need for Special Resolution Regimes for Financial Institutions—The Case of the European Union. IMF Working paper. September.

My research is on financial stability, but I like to dabble in the economics of fictional stories every once in a while. This is too good not to share.

“Nuclear power and financial systems both have the capacity to blow up the world.”

John Kay’s column in the Financial Times, or up on his blog. I think this might very well be my new favorite systemic risk exaggeration.

I think discussions of plurality in note issue deserve a place back in money and banking classes, especially with regard to modern monetary challenges. In this video, Larry White gives a quick introduction to some of the features of free banking systems.

First, let me tell you that I do think there are much more important subjects to be using your time and energy on than reflecting on hypotheticals in the fantasy world of horror flick monsters.  Still, we’re allowed to have a little fun every once in a while, especially on Halloween, right?

With that in mind, those who know me well know that I REALLY like economics and I REALLY like zombies, among other silly things. So when LearnLiberty publishes something like this video, it makes me really, really happy. I tried the genre once, too.

Even though it is fun and awesome to have something mixing zombies and economics, I’m not convinced by the analysis in the video. What I like about zombie fiction is the alternative institutional arrangements that authors come up with precisely because of the breakdown of the traditional institutions that support exchange and peaceful cooperation. What is interesting is precisely that there is no more room for money to emerge, and no more room for anything even resembling a market price. The question is not whether bullets or shovels will become the new currency. How do you trade and cooperate with one another when there is no significant rule of law left? How do you enforce contracts? What happens to cooperation when your time horizon becomes dramatically uncertain?

Anthony Davies asks, but cannot answer, whether zombies face decreasing marginal utility. Of course not! The lights are on but nobody’s home for zombies. They don’t feel needs, they are never satisfied, and they know no fear. They are moved only by stimuli and reflex. That is precisely what is scary about them: what happens to traditional defense techniques, strategies, and weapons when there is no such thing as deterrence? What happens when your opponents have a perfectly inelastic demand for feasting on your guts? How do zombie apocalypse entrepreneurs solve all of these challenges?
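For readers who want the formula behind that last question: “perfectly inelastic” means a price elasticity of demand of zero, so the quantity demanded does not budge no matter how costly the pursuit becomes. A minimal sketch in standard notation (my illustration, not something from the video):

```latex
% Price elasticity of demand; perfect inelasticity is the limiting case
% \varepsilon = 0, where quantity demanded does not respond to price at all.
\varepsilon \;=\; \frac{\%\,\Delta Q}{\%\,\Delta P}
           \;=\; \frac{\partial Q}{\partial P}\cdot\frac{P}{Q}
           \;=\; 0
```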

I can’t wait for this project to come out, and hopefully answer some of those questions.

[T]he mischief takes a wide range. Those who have been accommodated with loans must pay, whatever their readiness or ability to do so. Further advances cannot be obtained. Other banks must call in their loans and refuse to extend credit in order to fortify themselves against the uneasiness and even terror of their own depositors. Confidence is destroyed. Enterprises are stopped. Business is brought to a standstill. Securities are enforced. Property is sacrificed, and disaster spreads from locality to locality. All these incidents of the banking business are matters of common knowledge and experience.

Supreme Court of Kansas. 1911. Schaake v. Dolley, 118 P. 80, 83 (Kansas denying a charter to a new bank because “the economy could not support another bank”).

“Sokal experiments” are experiments that test the robustness of the peer review or editorial review processes of academic journals. The original Sokal experiment was carried out by physics professor Alan Sokal, who tricked a journal into publishing a terrible paper, full of nonsense and pretentious lingo, to demonstrate biases and a lack of intellectual rigor in some strands of the social sciences. In another such experiment, papers that were originally rejected but ultimately published elsewhere were resubmitted to the same journals, and quite often succeeded on their second run, suggesting (1) that there is a lot of blind chance involved and (2) that editors and peer reviewers don’t follow the literature all that well. You can find many other “Sokal experiments” out there.

One of them, recently published in Science, tried a similar experiment, specifically targeted at open-access journals. The authors submitted a paper of which “any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s short-comings immediately. Its experiments are so hopelessly flawed that the results are meaningless.” It is basically a demonstration that something being written somewhere (or published somewhere) does not make it true.

Open-access journals are under a lot of criticism because they operate more or less on a “pay-to-publish” basis. It is often implied that these journals’ incentive structure is such that they would have lower standards because of the money involved. What I think is implied by these studies is that bad review processes and open-access journals ultimately hurt “science,” since all researchers in a discipline are affected by the publication of bad papers, which lowers the standards of science. This is exemplified by the quotation in the coda: “Journals without quality control are destructive, especially for developing world countries where governments and universities are filling up with people with bogus scientific credentials”. I think it is an overstatement to describe these bad papers as destructive, and the costs are not as diffuse as these studies imply. If they do have a real impact on recruitment (something none of those studies explores), I think that says a lot more about the flaws of those hiring processes than about open-access journals.

While Science’s study certainly is informative, and I’m sure there are tons of terrible papers published in these journals, I’m not sure the experiment is particularly significant. First, it is also the case that some bad papers end up in good, properly peer-reviewed journals. All of our disciplines are becoming increasingly specialized while at the same time often very much interdisciplinary. That makes peer review rather difficult in many cases.

But my biggest problem with many of these experiments is that they take a much too narrow definition of what peer review is. If you define it strictly as the process involving three parties (the author, the editor, and the blind reviewers), then yes, these experiments show that it has failed. But peer review, in the broad sense, is much more decentralized than that. It is the reputation of researchers at large, and it is also the individual responsibility of researchers to be cautious about whom and what they cite. It is an ongoing process that neither starts nor stops with publication.

For example, if I’m going to make a controversial claim in print and I want it to be supported by previous literature, I’m probably not going to cite Nobody from Nowhere University in the No-name Review of Anything Goes. I’m going to find reputable sources. Sometimes it might be enough to cite good authors publishing material in a lower-tier open-access journal, or the other way around. Criteria for these things are largely tacit, and are most of the time subject to disagreement.

Also, double-blind review is only a first screening process. The real test comes with the passage of time: whether the paper gets picked up and cited by other researchers, and whether it has an impact on the scientific community. Only after a while can you judge whether a paper was good or important, and what kind of citation it is suited for.

For these reasons, the double-blind review process shouldn’t become a fetish. It serves a purpose, but its goal might be fulfilled by other means. As such, terrible papers in peer-reviewed, or supposedly peer-reviewed, journals might not hurt “science” as a whole, but mostly the journals themselves.

Ultimately, a more significant experiment would take a much, much longer time frame and study not only the double-blind review process, but also the article’s citation pattern and its impact at large. My guess is that such hoax papers would fail that longer test, which would demonstrate that blatantly bad research doesn’t get anywhere, even if it is published. If the concern is bad recruitment, then those studies should focus on recruitment, rather than on review processes at open-access journals as a distant proxy. And that is the problem with “Sokal experiments.”
