
I’ve had three working papers made available on SSRN recently. One is economic history, another is political economy, and the third is contract theory. Two of them relate to free banking, and two to insolvency. In order of pre-publication:

  1.  Free Banking and Economic Growth in Lower Canada, 1817-1851, with Vincent Geloso

    Generally, the historical literature presents the period from 1817 to 1851 in Lower Canada (modern-day Québec) as one of negative economic growth. This period also coincides with the rise of free banking in the colony. In this paper we study the effects of free banking on economic growth, using theory and evidence to examine whether growth was in fact negative. First, using monetary identities, we argue that given the increase in the stock of money and the reduction in the general price level, there must have been a positive rate of economic growth during the period. We also provide complementary evidence from wages that living standards were rising; growth therefore cannot have been negative. Second, we argue that the rise of privately issued paper money under free banking mitigated the problem of the abundance of poor-quality coins in circulation that resulted from legal tender legislation, and that it facilitated credit networks and exchange. We link this conclusion to the emergence of free banking, which must have been an important contributing factor. Although we cannot precisely quantify the effect of free banking on economic growth in Lower Canada, we can be confident that its effect on growth was clearly positive.

  2. Robust Political Economy and the Insolvency Resolution of Large Financial Institutions

    This research applies the robust political economy framework to a comparative institutional analysis of insolvency procedures for large US financial institutions. The regimes investigated are the bailout of financial institutions; the Dodd-Frank Act’s Orderly Liquidation Authority, both through procedures that follow its original intent and through a ‘bail-in’ route; and three bankruptcy possibilities, including Chapter 11, a so-called “Chapter 14,” and a mandatory auction mechanism used as a benchmark. We study the robustness of these regimes’ procedures against five criteria, both ex ante and ex post: the initiation of insolvency proceedings, too-big-to-fail moral hazard, the filtering mechanism, the allocation of resources, and their alleged ability to contain systemic externalities.

  3. In Which Context is the Option Clause Desirable?

    The option clause is a contractual device from free banking experiences meant to prevent banknote redemption duels. It has been used within the Diamond and Dybvig [Douglas W. Diamond and Philip H. Dybvig. 1983. “Bank Runs, Deposit Insurance, and Liquidity.” Journal of Political Economy 91 (3): 401-419] framework to suggest that very simple contractual solutions can act as an alternative to deposit insurance. This literature has, however, been ambiguous on whether the option clause can replace deposit insurance outside of those two contexts. It will be argued that the clause does not generally affect the likelihood that a solvent bank goes bankrupt because of a bank run, as empirical evidence suggests that likelihood is already near zero, and that exercising the clause diminishes the size of creditor claims on bank assets because it exacerbates the agency problem of bank debt. The clause is therefore only desirable (a) in free banking systems that are historically devoid of bank runs in the first place and have other means of managing debt-related agency problems, and (b) under the unrealistic assumption that bank runs are self-fulfilling prophecies. The agency problem of bank debt makes the option clause undesirable outside of free banking systems.
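The monetary-identity argument in the first abstract can be sketched numerically. This is a toy illustration of the equation of exchange, M·V = P·Y, with made-up numbers rather than the paper’s data: if the money stock M rises while the price level P falls and velocity V is roughly stable, real output Y must have grown.

```python
# Toy sketch of the monetary-identity argument (hypothetical numbers, not
# the paper's data). From M * V = P * Y, real output is Y = M * V / P, so
# output growth follows mechanically from the growth rates of M, V, and P.

def implied_output_growth(money_growth, price_growth, velocity_growth=0.0):
    """Growth rate of real output implied by M*V = P*Y (rates as decimals)."""
    return (1 + money_growth) * (1 + velocity_growth) / (1 + price_growth) - 1

# Hypothetical example: money stock up 50%, price level down 20%, V stable.
g = implied_output_growth(0.50, -0.20)
print(round(g, 4))  # positive: output must have grown over the period
```

The point is only qualitative: under stable velocity, a rising money stock combined with a falling price level is arithmetically inconsistent with negative output growth.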

There’s more where those came from; I might add another one soon, on bankruptcy theory.


“Sokal experiments” are experiments that test the robustness of the peer review or editorial review processes of academic journals. The original was conducted by physics professor Alan Sokal, who tricked a journal into publishing a terrible paper, full of nonsense and pretentious jargon, to demonstrate the biases and lack of intellectual rigor in some strands of the social sciences. In another such experiment, papers that had originally been rejected but were ultimately published elsewhere were resubmitted to the same journals, and quite often succeeded on their second run, suggesting (1) that a lot of blind chance is involved and (2) that editors and peer reviewers don’t follow the literature all that well. You can find many other “Sokal experiments” out there.

One of them, recently published in Science, tries a similar experiment, targeted specifically at open-access journals. The author submitted a paper so flawed that, in his words, “any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s short-comings immediately. Its experiments are so hopelessly flawed that the results are meaningless.” It is basically a demonstration that just because something is written (or published) somewhere doesn’t make it true.

Open-access journals attract a lot of criticism because they operate more or less on a “pay-to-publish” basis. It is often implied that these journals’ incentive structure leads them to lower their standards because of the money involved. What I think these studies imply is that bad review processes and open-access journals ultimately hurt “science”: all researchers in a discipline are affected by the publication of bad papers, which lowers the standards of science. This is exemplified by the citation in the coda: “Journals without quality control are destructive, especially for developing world countries where governments and universities are filling up with people with bogus scientific credentials”. I think it is an overstatement to describe these bad papers as destructive, and the costs are not as diffuse as these studies imply. If they do have a real impact on recruitment (something none of these studies explores), that might say a lot more about the flaws of those hiring processes than about open-access journals.

While Science’s study certainly is informative, and I’m sure there are tons of terrible papers published in these journals, I’m not sure the experiment is particularly significant. First, bad papers sometimes end up in good, properly peer-reviewed journals too. Our disciplines are becoming increasingly specialized while at the same time often very much interdisciplinary, which makes peer review rather difficult in many cases.

But my biggest problem with many of these experiments is that they take a much too narrow definition of what peer review is. If you define it strictly as the process involving three parties (the author, the editor, and the blind reviewers), then yes, these experiments show that it has failed. But peer review, in the broad sense, is much more decentralized than that. It rests on the reputation of researchers at large, and on the individual responsibility of researchers to be cautious about whom and what they cite. It is an ongoing process that neither starts nor stops with publication.

For example, if I’m going to make a controversial claim in print and I want it supported by previous literature, I’m probably not going to cite Nobody from Nowhere University in the No-name Review of Anything Goes. I’m going to find reputable sources. Sometimes it might be enough to cite good authors publishing in lower-tier open-access journals, or the other way around. The criteria for these things are largely tacit, and most of the time subject to disagreement.

Also, double-blind review is only a first screening. The real test comes with the passage of time: whether the paper gets picked up and cited by other researchers, and whether it has an impact on the scientific community. Only after a while can you judge whether a paper was good or important, and what kind of citation it merits.

For these reasons, double-blind review shouldn’t become a fetish. It serves a purpose, but its goal might be fulfilled by other means. As such, terrible papers in peer-reviewed journals, or supposedly peer-reviewed ones, might not hurt “science” as a whole so much as those journals themselves.

Ultimately, a more significant experiment would take a much, much longer time frame and study not only the double-blind review process but also the article’s citation pattern and its impact at large. Those experiments, however, are sure to fail: they would demonstrate that blatantly bad research doesn’t get anywhere, even if it is published. If the concern is bad recruitment, then those studies should focus on recruitment rather than on review processes at open-access journals as a distant proxy. And that is the problem with “Sokal experiments.”
