
The Online Library of Liberty’s Liberty Matters debate forum just hosted a very interesting discussion on Ludwig von Mises’ Theory of Money and Credit (1912). The lead essay is by Lawrence H. White, with comments by Jörg Guido Hülsmann, Jeffrey Hummel, and George Selgin, and a final reply by White.

It contains, among other things, an enlightening reply by White on Mises’ purported disapproval of free banking, and on free banking’s supposed procyclicality. Other topics include a reassessment of the original contributions of Mises’ book and how his “regression theorem” holds up in light of the emergence of Bitcoin.

This passage in Hummel’s comment was of particular interest to me:

[W]e must carefully distinguish between favoring free banking as a legal regime and predicting how it would operate in practice. I think Larry goes too far when he seems to imply that Mises had in mind the kind of free banking that he (1999) and George (1988) predict would emerge without regulation: that is, a system in which reserve ratios are extremely low and banks adjust the money supply to demand in a way that stabilizes velocity. As much as I may agree with their prediction, I can assure them that Sennholz repeatedly affirmed his belief that unregulated competition among banks would drive reserve ratios up very high and possibly close to 100 percent, and he left the impression that such was Mises’s opinion as well.

Of course Hummel knows that White’s and Sennholz’s claims are equally predictions, and he admits his own support for the idea that reserve ratios would be extremely low. But what I want to get at is that Sennholz’s prediction is much less well supported than White’s. They are not equal predictions. This is important because elsewhere one-hundred-percenters have suggested that the market would favor 100% reserves anyway. White replies to this passage:

Mises in Human Action (p. 446) does quote Cernuschi to the effect that free banking would have narrowed the use of banknotes considerably, and in other ways suggests that reserve ratios under free banking would be, as Hummel puts it, “up very high and possibly close to 100 percent.” If that is Mises’s prediction, then on this point I do depart from Mises. In my 1992 essay that Hummel cites, I criticized Mises for suggesting that free banking would produce reserve ratios close to 100 percent. The best historical evidence we have, from the Scottish free-banking system and other mature systems, shows reserve ratios below 10 percent.

The historical evidence is one way of answering this. In my own (unpublished) research with Antoine Gentier, we found that New England’s freest banking systems, in terms of both freedom of entry and banking regulation (i.e., not in the “Free Banking Laws” sense), had similarly low reserve ratios. But there are also theoretical reasons.

Competition over banks’ financial stability does not occur over reserve ratios alone. In our study, for example, banks competed over capitalization levels to prove their resilience. But they could also compete over the liquidity of their assets, their ratio of demand debt to total debt, or a variety of “living will” arrangements (shareholders’ liability regime, option clauses, clearinghouse memberships, etc.), to name a few. I’m going to conjecture (and derogate from Selgin’s comment on the use of statistics) and say that, given the prevalence of banknote circulation as a source of banking profit in free banking systems relative to the costs of these other ways banks can prove their financial stability, it is not at all a blind prediction, nor one merely supported by historical anecdotes, to say that reserve ratios under free banking would be closer to 1% than to 100%. In fact, it would take a particularly unfree institutional environment for competition between banks to lead to 100% reserve ratios.


The Diamond-Dybvig framework assumes that the bank cannot distinguish between short-term agents who withdraw for genuine consumption needs and long-term agents who withdraw because they self-fulfillingly anticipate a run. Combined with the sequential service constraint, even if the bank invokes a suspension clause there is a risk that short-term agents end up at the back of the queue and, in the model’s stark terms, starve. How does this assumption hold up in 21st-century banking, where algorithms instantly detect unusual withdrawals to protect depositors from fraud? Is it science fiction to think those algorithms could be calibrated to trigger a variant of the suspension clause against panicky depositors exclusively?
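The difference between a blanket suspension and the targeted variant speculated about above can be illustrated with a toy simulation. This is only a sketch of the intuition, not the Diamond-Dybvig model itself: the depositor shares, the suspension threshold, and the perfect identification of panicking agents are all assumptions of my own for illustration.

```python
import random

def simulate_run(n=1000, impatient_frac=0.25, panic_frac=0.5,
                 suspension_threshold=0.3, targeted=False, seed=0):
    """Toy sequential-service queue inspired by Diamond-Dybvig.

    'impatient' depositors withdraw for real consumption needs; a share
    of the patient depositors panic and also join the queue.  The bank
    suspends convertibility once a fraction `suspension_threshold` of
    total deposits has been paid out.  With targeted=False (a blanket
    suspension), everyone behind that point is refused; with
    targeted=True, the suspension is applied only to panicking
    depositors, assuming they can be identified perfectly.

    Returns the number of impatient depositors left unserved.
    """
    rng = random.Random(seed)
    queue = []
    for _ in range(n):
        if rng.random() < impatient_frac:
            queue.append("impatient")
        elif rng.random() < panic_frac:
            queue.append("panicking")
    rng.shuffle(queue)  # random arrival order (sequential service)

    paid = 0
    unserved_impatient = 0
    for agent in queue:
        suspended = paid >= suspension_threshold * n
        if suspended and (not targeted or agent == "panicking"):
            if agent == "impatient":
                unserved_impatient += 1
            continue
        paid += 1
    return unserved_impatient

# A blanket suspension strands impatient depositors stuck behind the
# cutoff; the targeted variant strands none of them.
blanket = simulate_run(targeted=False)
targeted = simulate_run(targeted=True)
```

Under the blanket rule some genuinely needy depositors are refused simply because of their place in the queue, which is exactly the welfare cost the model attributes to suspension; the targeted rule eliminates that cost by construction, which is why the identification assumption is doing all the work.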

I’m asking because in his 1993 paper George Selgin criticizes bank suspension as portrayed in Diamond-Dybvig. While so-called “bank holidays” fit the Diamond-Dybvig suspension, some better-conceived suspension policies were partial, in the sense that depositors could still use their checkbooks and banknotes to consume. They didn’t starve. But bank suspensions might also be partial in another way: convertibility might be suspended only for depositors who seem to be panicking. In fact, even though it was aimed at predatory redemption “duels” rather than Diamond-Dybvig “panic” runs, the option clause of the Scottish free-banking experience was not always used as a blanket measure applying systematically to all banknotes. In some duels, bona fide customers could still convert their notes while the clause was invoked against other banks’ agents. Granted, it might be easier to tell a regular customer from a competing bank’s employee than to tell a customer withdrawing for real needs from one who is merely panicking, if only because competitors would present a much larger volume of notes for redemption than regular customers ever would. But with today’s technology…?
