Archives for category: Essais

To participate in an Institute for Humane Studies program I was interested in, you had to write a short essay on how a famous article or book is misguided and inimical to liberty. I wrote the essay below for the occasion, and I’m pretty happy with how it turned out, so I’m sharing it here. Some readers will instantly recognize the heavy influence of chapter 6 of Lawrence H. White’s Theory of Monetary Institutions—get this book.

The seminal paper by Diamond & Dybvig (1983) on bank runs is misguided and inimical to liberty. It suggests that banks are inherently unstable, always on the verge of suffering a “redemption run” at any unrelated “sunspot”; that it is absolutely necessary for bank runs to be suppressed; and that deposit insurance is the most effective way to do so. In their model, if banks are to survive, it has to be through intervention in the financial system. The basic features of this model are still present in most publications on financial stability to this day.
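The two-equilibrium logic the model relies on can be sketched in a few lines. This is only an illustration with made-up numbers (the payoff function, the fire-sale price, and the long-term return are my assumptions, not Diamond & Dybvig’s actual specification): each depositor’s best action depends on what the others do, so both “nobody runs” and “everybody runs” are self-fulfilling.

```python
# Stylized sketch of the Diamond & Dybvig (1983) intuition, with
# illustrative (hypothetical) numbers rather than their actual model.

def payoff_of_waiting(s, fire_sale=0.8, long_return=1.5):
    """Payoff to a patient depositor who waits, when a share s of
    depositors withdraws early. Each early withdrawer is paid 1,
    funded by liquidating long-term assets at the fire-sale price."""
    assets_left = 1.0 - s / fire_sale  # units left after fire sales
    if assets_left <= 0:
        return 0.0                     # the bank fails; waiters get nothing
    return assets_left * long_return / (1.0 - s)  # waiters split proceeds

# If nobody runs, waiting pays more than the early payoff of 1,
# so the no-run outcome is self-fulfilling.
print(payoff_of_waiting(0.0))  # 1.5
# If most others run, waiting pays less than 1, so running is also
# self-fulfilling: the model's "sunspot" run equilibrium.
print(payoff_of_waiting(0.7))  # well below 1
```

The point of the essay is that this multiplicity, where a run can be triggered by anything and must always destroy the bank, is precisely what the historical evidence below fails to find.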

First, contrary to what the model suggests, bank runs are generally not responsible for the initial shock. Gorton (1988) studies the National Banking Era in the US and finds that, for each of the 7 crises he identifies, bank runs were instead the result of a prior event announcing a probable depreciation of banking assets. Likewise, Calomiris (1991) finds that over 1875–1913 all banking panics (generalized runs on all banks) happened within the quarter following an abrupt increase in business failures. Mishkin (1991) studies bank panics from 1857 to 1988 and finds that, for all but the panic of 1873, panics occurred well after the recession had started.

Second, the banks that do go bankrupt because of a bank run are those that were insolvent before the run. Solvent banks can generally borrow from other banks, and other institutions, historically clearinghouses, have a large repertoire of possible solutions to help banks in crisis. While bank runs and the associated liquidity problems can be aggravating factors, even in the worst bank panic episodes they are causes of bank failure only in exceptional circumstances (Kaufman 1987, 1988). Even in the era most fruitful in banking panics and runs, the American National Banking Era, runs were a primary cause of failure in only one case out of 594 bank bankruptcies (Calomiris 1991, 154). Calomiris & Mason (1997) study the banking panic of June 1932 in Chicago and find that no pre-run-solvent bank failed. Reviewing this literature, Benston & Kaufman (1995, 225) conclude that “the policy implications of the Diamond & Dybvig (1983) model are not very useful for understanding the workings of the extant banking and payments system.”

A third reason is that most runs have in fact been partial “verification” runs. Depositors eventually figure out that the bank will likely survive the crisis, and the run stops. This is impossible in the Diamond & Dybvig (1983) framework; once initiated, the run must always go through and make the bank fail. Ó Gráda & White (2003) study a single bank from the 1850s. They investigate depositor behavior through individual account data, particularly through the panics of 1854 and 1857. The bank survived both. They find that runs are not sudden, but involve a learning mechanism in which random beliefs are progressively dropped, while behavior motivated by legitimate signals becomes more important over time. Panic does not displace learning in the market processes of bank runs.

Finally, if Diamond & Dybvig (1983) were correct, it should apply to all fractional-reserve banking systems without deposit insurance. But, as the US-centric literature cited above already hints, bank runs are much more common in U.S. history than elsewhere, and bank panics are specific to the American National Banking Era and attributable to the bank regulation of that era, such as the ban on branch banking, which made mergers with insolvent banks impossible, and the bond deposit system, which limited note issue at critical times (Smith 1991). Bordo (1990, 24) compares bank panics internationally and comments that “the difference in the incidence of panics is striking.” While over 1870–1933 the US had four panics, there were none in Britain, France, Germany, Sweden, and Canada, despite the fact that “in all four countries, the quantitative variables move similarly during severe recessions to those displayed here for the U.S.” Table 2-1 in Schwartz (1988, 38–39) reports that from 1790 to 1927 the U.S. experienced 14 panics, while Britain, the only other country with as many observations, experienced 8, all of them before 1867.

Not only does Diamond & Dybvig (1983) attribute to bank runs much higher costs than the evidence warrants, it also shrouds their benefits. My research suggests that bank runs could play an important role in initiating insolvency procedures earlier, before the bank can enlarge its losses, and could therefore limit systemic externalities.


Most people reading this blog are probably already aware of the “dehomogenization” charges led by Joseph Salerno. In an essay first published in 1992, Salerno argues that Friedrich Hayek’s thought should be dehomogenized from Ludwig von Mises’s, strongly implying that the latter is better than the former. This distinction builds on Hutchison’s (1981) distinction between “Hayek 1” and “Hayek 2.” As the argument goes, somewhere around the publication of the “Economics and Knowledge” essay in 1937, Hayek switched from Mises’s a priorism to Karl Popper’s falsificationism. This assertion is very convincingly challenged by Bruce Caldwell’s 1988 essay; Horwitz’s 2003 essay suggests that there really isn’t much to the heterogeneity, and that what there is of it is complementary. You’ll find an interesting discussion arguing that Hayek was in fact methodologically a Misesian in Roger Koppl’s Big Players and the Economic Theory of Expectations, and interesting bits and pieces on this debate in Pete Boettke’s Living Economics, among many other essays. Readers might also enjoy the numerous posts on the topic over at Punto de Vista Economico.

I’ve been reading Ross B. Emmett’s 2007 essay titled “Knight’s Challenge (to Hayek): Spontaneous Order Is Not Enough for Governing a Liberal Society,” in the volume Liberalism, Conservatism, and Hayek’s Idea of Spontaneous Order, edited by Peter McNamara and Louis Hunt. According to Emmett, a constant in Frank Knight’s criticism of Hayek is the role of discussion. This is seen in the capital theory controversy between the two, but also in his reviews of The Road to Serfdom and The Constitution of Liberty. According to Emmett (pp. 69–70):

While the substance of their “capital controversy” need not detain us, Knight drew some interesting conclusions from their exchange regarding the prospects for liberalism; these conclusions foreshadow his criticisms of Hayek some 30 years later. During the controversy the two men corresponded about their differences, and Knight believed they were making progress toward a common understanding through the give-and-take of discussion about specific questions and responses. But then Hayek, unbeknownst to Knight, published an article on the theory of capital that made only a passing reference to Knight’s criticisms. Knight interpreted the article to mean that Hayek would make little effort to respond directly to the specific objections of Knight and others to Austrian capital theory.

Emmett documents how Knight emphasized the role of discussion from both methodological and political-philosophy perspectives. According to Knight, discussion has a role not only in science, but also in law making. The idea of a free society, for Knight, is ‘the search for agreement by discussion, which advances in response to “specific questions” or particular problems, rather than a “systematic exposition” of abstract positions’ (ibid). Knight thought these two forms of discussion, scientific and political, were absent in Hayek.

Let us, however, concentrate on the methodological portion of Knight’s criticism. It is most obvious and explicit in his reply to Hayek’s early work (pre-1937), but it is really present throughout. What I found highly interesting is that, at least in Knight’s criticism, Hayek is the one with a Misesian methodology (Knight calls it a “systematic exposition”) while Knight is the one adopting the more Popperian position concerning the role of discussion among scientists (“the meeting of specific questions is the way to ‘advance knowledge’,” from Knight’s 1934 letter quoted in Emmett). I’m not sure whether this is enough of a distinctive and unique trait of Popper’s methodology. It’s pretty central in his French essays but does not seem to be much of a focus in the Anglo-Saxon secondary literature on Popper.

I would be interested in hearing from people more familiar with Knight and Popper than I am on this topic.

“Sokal experiments” are hoaxes that test the robustness of the peer-review or editorial-review processes of academic journals. The original Sokal experiment was conducted by physics professor Alan Sokal, who tricked a journal into publishing a terrible paper, full of nonsense and pretentious lingo, to demonstrate the biases and lack of intellectual rigor in some strands of the social sciences. In another such experiment, papers that had originally been rejected but were ultimately published elsewhere were resubmitted to the same journals, and quite often succeeded on their second run, suggesting (1) that there is a lot of blind chance involved and (2) that editors and peer reviewers don’t follow the literature all that well. You can find many other “Sokal experiments” out there.

One of them, recently published in Science, tries a similar experiment, specifically targeted at open-access journals. The authors submitted a paper about which “any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s short-comings immediately. Its experiments are so hopelessly flawed that the results are meaningless.” It is basically a demonstration that something being written somewhere (or published somewhere) does not make it true.

Open-access journals come under a lot of criticism because they operate more or less on a “pay-to-publish” basis. It is often implied that these journals’ incentive structure leads them to lower their standards because of the money involved. What these studies seem to imply is that bad review processes and open-access journals ultimately hurt “science,” as all researchers in a discipline are affected by the publication of bad papers, which lowers the standards of science. This is exemplified by the citation in the coda: “Journals without quality control are destructive, especially for developing world countries where governments and universities are filling up with people with bogus scientific credentials”. I think it is an overstatement to describe these bad papers as destructive, and the costs are not as diffuse as these studies imply. If they do have a real impact on recruitment (something none of these studies explores), that might say a lot more about the flaws of those hiring processes than about open-access journals.

While Science’s study certainly is informative, and I’m sure there are tons of terrible papers published in these journals, I’m not sure the experiment is particularly significant. First, bad papers also sometimes end up in good, properly peer-reviewed journals. All of our disciplines are becoming increasingly specialized while at the same time often very much interdisciplinary, which makes peer review rather difficult in many cases.

But my biggest problem with many of these experiments is that they take a much too narrow definition of what peer review is. If you define it strictly as the process involving three parties (the author, the editor, and the blind reviewers), then yes, these experiments show that it has failed. But peer review, in the broad sense, is much more decentralized than that. It is the reputation of researchers at large, and it is also the individual responsibility of researchers to be cautious about whom and what they cite. It is an ongoing process that neither starts nor stops with publication.

For example, if I’m going to make a controversial claim in print and I want it to be supported by previous literature, I’m probably not going to cite Nobody from Nowhere University in the No-name Review of Anything Goes. I’m going to find reputable sources. Sometimes it might be enough to cite good authors publishing material in a lower-tier open-access journal, or the other way around. The criteria for these things are largely tacit and, most of the time, subject to disagreement.

Also, double-blind review is only a first screening. The real test comes with the passage of time: whether the paper gets picked up and cited by other researchers, and whether it has an impact on the scientific community. Only after a while can you judge whether a paper was good or important, and what kind of citation it deserves.

For these reasons, the double-blind review process shouldn’t become a fetish. It serves a purpose, but its goal might be fulfilled by other means. As such, terrible papers in peer-reviewed, or supposedly peer-reviewed, journals might not hurt “science” as a whole, but mostly the journals themselves.

Ultimately, a more significant experiment would take a much, much longer time frame and study not only the double-blind review process, but also the article’s citation pattern and its impact at large. Such experiments, however, would be bound to show that blatantly bad research doesn’t get anywhere, even when it is published. If the focus is bad recruitment, then those studies should examine recruitment itself, rather than review processes at open-access journals as a distant proxy. And that is the problem with “Sokal experiments.”

Something I’ve been hearing a lot is that the French classical-liberal school of economics disappeared, somewhere around the end of the 19th century and the beginning of the 20th, because of the professionalization of economics. The argument goes that classical liberals had been petitioning the government for Chairs of political economy in Law faculties for nearly 40 years. When these were finally created in the provincial faculties in 1883, only the agrégés de droit, doctors in French and Roman law who had passed a public contest, were allowed to fill them. These agrégés, it is said, were both insufficiently trained in political economy and opposed to classical liberalism by professional deformation. They were naturally inclined to accept the teachings of the German historicists. “Thus the liberal school which, in blatant contradiction to its own politico-economic principles, had campaigned long and hard for a State solution to a perceived educational problem, was hoist with its own petard,” writes Salerno.

In a sense I am sympathetic to this interpretation: why would the Chairs of political economy have bitten the hand that fed them? However, the situation is much more nuanced than this account allows. It is not the case that “not one of the liberal candidates was an agrégé and only two or three were docteurs en droit”. Out of 13 newly created Chairs, at least two were occupied by classical liberals: Alfred Jourdan in Aix-en-Provence and Edmond Villey in Caen. Even the historical Chair at the Paris Law faculty was at the time occupied by a classical liberal, Paul Beauregard, later replaced by Auguste Souchon (also a classical liberal) before he accepted a newly created Chair of rural economics. Other Chairs, outside the Law faculties, were also created and given to classical liberals, such as one at the Collège de France given to Henri Baudrillart and later to Pierre-Émile Levasseur. Classical-liberal economists of the Say-Bastiat kind were among the newly professionalized economists, though other sensibilities were perhaps disproportionately represented.

The charge that the classical liberals who weren’t agrégés de droit disappeared because they were barred from entry into the Chaires d’économie politique also deserves nuance. Starting in 1891, the law agrégation featured an economics option, and starting in 1896, before the classical liberals can be said to have disappeared, economics had its own distinct agrégation. The classical liberals had their representative on its jury in Pierre-Émile Levasseur. It is good to remember, however, that there were very few agrégé positions to be granted, and Chairs were at the time “handed from father to son invoking cooptation, from father-in-law to son-in-law, from uncle to nephew and nephew through marriage”, in a way that left very little room for outsiders. The classical liberals were no strangers to this brand of corporatism: the previous quote is from Walras’s autobiography and describes his experience with the classical liberals and their early stranglehold over French institutions.

The opposition, still according to Salerno, was embodied by the journal co-created by Charles Gide to compete with the Journal des économistes, the main classical-liberal publication. Indeed, the Revue d’économie politique was created in 1887 by those agrégés de droit holding Chairs, seemingly in direct reaction to an attack by the French free banker Jean-Gustave Courcelle-Seneuil on the quality of economic teaching in the Law faculties. Yet among its editorial committee, half were classical liberals for the first few years, until Alfred Jourdan passed away. While the journal did publish German historicism, it also published decidedly “Paris school” articles, and even the writings of Austrian economists such as Menger and Böhm-Bawerk. Even its mission statement was not dedicated to the “avowed programme of reaction against the doctrines of the optimist Liberal school, and the propagation of foreign, especially German economic schools,” as Charles Gide would later retcon, but to an eclectic mix of all those things and more, such as sociology, reflecting the very diverse influences of the French law professors holding Chairs of political economy. If the journal should be linked to Charles Gide’s person, it could be said to reflect his ‘jack of all trades’ interests rather than his penchant for German historicism, or even interventionism.

So why did the French liberal school disappear? There are several reasons, which do include the corporatism of the French Law faculties, which had been adverse to them. A large chunk of the explanation, I believe, lies in the fact that the classical liberals eventually became dilettantes, more interested in doing politics and other activities than in publishing. Their output diminished and eventually disappeared, only to have to be “rediscovered” in France in the 1970s. The disappearance of the French liberal school, in a certain sense, is to be found in too little professionalization rather than too much. The attraction of German historicism also shouldn’t be neglected, as the Methodenstreit seems to have won German methodologies many more adherents in France than the Austrians’. It would require more research, but it might even have “turned” some classical liberals.

The word zombie is sometimes used to refer to firms that are virtually insolvent, in a state where they can merely afford to service their debt, or to government-sponsored vehicles where bad assets are stashed to clear underperforming loans from banks’ balance sheets. The zombies I’m concerned with here, however, are actual flesh-eating undead. I think zombie fiction is especially interesting to economists because it mirrors a lot of debates going on within the profession.

A lot of people do not understand what zombies are all about and miss out on some great fiction. While it is true that zombies are the most brainless horror-flick monsters around, it does not mean that zombie movies are senseless gore films. In fact, there is a long tradition of using zombies as a plot device to push social commentary and reflect on human behavior. Because zombies are clumsy, mindless, and generally easy to trick or avoid, zombie stories are not so much about the zombies as about the survivors and how they cooperate. Zombie fiction allows us to witness miscooperation leading to dire consequences without having to experience these situations, just as economists use economic models (in the very loose sense) to reflect upon economic miscooperation because they don’t have the luxury of experimenting.

Because zombies are so easy to overcome, storylines have relied on other threats, which you could call zombie-survival market failures. From an economist’s point of view, a lot of zombie stories involve variants of close-ended non-cooperative games, where egos and foul play get in the way of happy Pareto-optimal endings. The model of human behavior is generally one where humans would have a perfect chance of surviving if they could execute their plan, but where adverse selection and moral hazard prevent them from spontaneously coordinating. That is, humans have good expectations about what needs to be done to survive the post-apocalyptic world, but conflicting plans, the absence of consensus, betrayal, and unenforceable contracts ultimately always lead to some of the most tragic outcomes possible.
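The close-ended non-cooperative game these stories rely on can be sketched as a standard two-player payoff matrix. The numbers here are hypothetical, chosen only to reproduce the prisoner’s-dilemma structure described above, where defection dominates even though mutual cooperation is the Pareto-optimal ending:

```python
# A sketch of the one-shot game between two survivor groups
# (hypothetical payoffs: higher means better survival odds).
# Each group can Cooperate with the joint plan ("C") or
# Defect ("D"): hoard, betray, go it alone.

payoffs = {  # (row action, col action) -> (row payoff, col payoff)
    ("C", "C"): (3, 3),  # the plan works: the Pareto-optimal ending
    ("C", "D"): (0, 4),  # the cooperator is left to the zombies
    ("D", "C"): (4, 0),
    ("D", "D"): (1, 1),  # nobody executes the plan; tragedy
}

def best_response(opponent_action):
    """Row player's best response given the opponent's action."""
    return max(("C", "D"), key=lambda a: payoffs[(a, opponent_action)][0])

# Defection is dominant, so mutual defection is the Nash equilibrium,
# even though mutual cooperation would leave everyone better off.
print(best_response("C"), best_response("D"))  # D D
```

First-generation zombie fiction, in effect, assumes the survivors are stuck in the bottom-right cell of this matrix.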

Think of George A. Romero’s Night of the Living Dead, the movie to which we owe modern zombies. It’s a huis clos movie where the survivors take shelter in a farm surrounded by flesh eating ghouls. Leaving aside the rather clumsy class struggle theme, the demise of the group is not so much due to the living dead trying to break in, but rather to the failure of survivors to cooperate and agree on a plan. The group on the ground level has a plan to resist zombie attacks that requires the unanimous cooperation of the group taking refuge in the cellar. The group in the cellar needs the collaboration of the other one for their radio, apparently a precious asset during zombie invasions. The zombies ultimately feed on their failure to agree. Other zombie movies by Romero explored similar themes, where safe havens that could have been shared are ultimately invaded and destroyed, leaving everyone worse off. This is the dominating theme in what I would call first generation zombie fiction, with more recent entries such as Zombieland also touching on it.

These behaviors can seem a little wooden, and overly pessimistic about human nature. In the face of certain death people would know exactly what to do to survive, but wouldn’t be able to do it because they value coming out on top of an argument or being in charge more highly than being alive? Moreover, in a post-apocalyptic world where a broken leg or a simple cut that gets infected can be the end of you, why is it that it is always failure to cooperate that leads to death instead of tragic unforeseen events?

Of course, in real life people do figure out how to coordinate, and they’re rather inventive in the ways they do it. Just think of the diversity and plurality of answers to coordination challenges: how some resources are managed by private firms, some by non-profit organizations, some by something in between. And just think of the inscrutable mix of organizations and institutions that have emerged from our cooperation efforts to guard and enforce these agreements. It is true, however, that in a situation of urgency there is no reason the survival learning process would be quick enough for survivors to adapt in time; zombies are not very forgiving. Still, the overall message of classic zombie fiction seems both overly pessimistic about human collaboration and overly optimistic about human expectations.

Fortunately, what could be called a second generation of zombie storytelling “models” human cooperation better. In Left 4 Dead, Mountain Man, or Day by Day Armageddon, for example, zombie apocalypse survivors do collaborate toward a plan, and there is clearly less betrayal leading to one’s own certain death. In these narratives, survivors lose their peers not necessarily because of a failure to coordinate, but because of truly unforeseen events. There is a sense that danger is unpredictable and all around the survivors. A final zombie attack no longer needs to arrive before the group has agreed on a course of action for zombies to feast on human sashimi.

Having imperfect survivors who are capable of learning and innovating, yet are radically ignorant (instead of “socially challenged” Judases), changes the whole dynamic of zombie invasions. It allows authors to explore other themes, such as anti-militarism. One common theme is that the government response always worsens the problem because of the knowledge problem, ordering survivors to seek shelter in areas that have already fallen prey to zombies. The only times the military actually improves the situation are through insubordination or desertion. Often in these stories the only path to survival is individual initiative and rugged survivalism, an admittedly caricatured version of entrepreneurship. Of course, this is not true of all zombie fiction; the popular zombie novel World War Z is little more than a glorification of the war economy and government crisis management.

But mostly, having more realistic ideal types allows the authors of The Walking Dead graphic novels (it’s less obvious in the TV show) to have groups of survivors experiment with a host of governance structures as their context and goals evolve. These range from spontaneous, leaderless voluntary cooperation to characters imposing their tyranny upon a small community, with varying levels of success in their survival efforts. For example, the Governor leads his group with an iron fist, ultimately suppressing the feedback that would have signaled to him that his plan was wasteful. In Rick’s group, on the other hand, projects are generally bottom-up initiatives validated or rejected by peers, and are much more successful. The franchise often explores the problems associated with welcoming new survivors and their effect on enforcement and guarding costs.

Since it is customary to finish with unsolicited policy advice, here is an implication derived from zombie fiction that will make it sound way more serious than it was ever meant to be: in times of crisis, the law of association is more important than ever to increase the division of labor and make each party’s efforts more productive. Funny how, for zombie apocalypses just as for real-world problems, themes emanating from Austrian economics seem to capture the problem of human cooperation better than mainstream economics, huh?

« Law & Economics »?

A new think tank I’ve been collaborating with, Droit & Croissance (or, as they call it in English, Rules for Growth), asked Pierre Bentata and me to define just what Law & Economics is. We gave a pretty standard textbook (i.e., neoclassical) definition, strongly inspired by Paul H. Rubin’s Concise Encyclopedia of Economics entry on law & economics, but one that sticks to what practitioners of the field actually do. It would have been out of place to venture into the emergence and origin of Law and other more advanced topics, but we still managed to cite Hayek in there.

Law & Economics, or the economic analysis of law, is more than the meeting of law and economics, for it is a form of legal analysis. Nor is it “economic law”; rather, it is legal analysis using the economist’s tools. It seeks to make explicit an order underlying the law, a logic of the law outside the law itself, which allows us to understand it and to extend its concepts coherently to hitherto unseen situations.

In a comment on Contrepoints, Emmanuel Martin responds to my January 3rd post, “Le réalisme en sciences économiques.” I take the liberty of reproducing his comment here because it is more complete and precise, and uses a much more academic register than my post, which was written in familiar language to address a familiar problem. Emmanuel confirms some of my arguments and brings corrections and clarifications to others:

There is realism, and then there is realism.

Realism refers first of all to the epistemological doctrine according to which reality exists independently of the observer and of the theories he may form about it (and thus stands against any philosophy of mind). There is therefore an objective reality to be studied.

It so happens that, in the social sciences, this ontological level of objectivity of facts and causal laws bears on the subjective content of social facts (valuations, expectations, goals…), that is, on the ontic level (see Hayek 1952). Ontological objectivism is therefore perfectly compatible with ontic subjectivism (see Uskali Mäki 1990).

The realist approach consists in making abstractions from the reality of particulars so as to draw universals from them (both in terms of objects and in terms of the causal relations, relations of necessity, that coordinate them). Aristotelian realism is immanent as opposed to transcendental: the existence of universals thus depends on that of particulars; that is, universals exist in the particulars that exemplify them, and not in a sphere independent of reality (like the world of Ideas in Plato).

This philosophical realism must be distinguished from the notion of realism found in the “realism of assumptions” (Mäki 1990), but we must recognize the close link between the two. Any theoretical isolation from empirical reality always requires a certain degree of “unrealism” in the sense of the assumptions. The whole question is which type of isolation abstracts precisely the essential elements of reality, by which this unrealism is to be judged. In this respect, the type of isolation adopted is guided by scientific aims, which may diverge.

Theories in terms of models (functionalism) seek to capture reality in a formal mold for the sake of prediction, whereas theories of causal processes (genetic causality) attempt to isolate the essential elements of reality for the sake of understanding (NB: the distinction between genetic-causal theories and functional theories was drawn by Mayer in 1932), which does not, however, exclude prediction.

The Austrians faced the rise of theories framed as formal models and tried to offer a theoretical perspective aimed at understanding reality. This understanding requires bringing to light certain essential characteristics of reality. The realist approach therefore requires a dose of unrealism in the process of abstraction, but the criterion for judging the necessity and appropriateness of this unrealism is indeed “the way the world works,” that is, reality’s “essential” characteristics (Mäki 1993, 2001).

Quick references:

HAYEK, Friedrich (1952/1953) Scientisme et sciences sociales, Plon.
MÄKI, Uskali (1990) “Scientific Realism and Austrian Explanation,” Review of Political Economy, Vol. 2, No. 3, pp. 310–344.
MÄKI, Uskali (1993) “The Market as an Isolated Causal Process: A Metaphysical Ground for Realism,” in Bruce Caldwell & Stephan Boehm (eds), Austrian Economics: Tensions and New Directions, Kluwer Academic Publishers.
MÄKI, Uskali (1994) “Isolation, Idealization and Truth in Economics,” Poznan Studies in the Philosophy of the Sciences and the Humanities, Vol. 38, pp. 147–168.
MÄKI, Uskali (2001) “The Way the World Works (www): Towards an Ontology of Theory Choice,” in The Economic World View: Studies in the Ontology of Economics, edited by Uskali Mäki, Cambridge University Press, pp. 369–389.

Austrian economics attracts a great many curious minds, novices, and laypeople, or at least far more than other schools of thought in the social sciences, with the possible exception of Marxism. As an economist, I am often surprised to see how well people with no training in economics, for whom the economist's way of thinking is relatively new, nonetheless understand certain theories. There are, of course, some gaps. For instance, many of these amateur economists (I do not mean that pejoratively) lack the fine-grained critical eye of my colleagues. Another frequent behavior is a certain myopia: they dismiss certain authors on superficial grounds, such as their membership or non-membership, real or perceived, in certain schools of thought.

One argument I often see used to defend the Austrian school, by these enthusiasts but sometimes also by economists, is that its theories are more "realistic" than those of other schools of thought. This can be interpreted in several ways, which can generally be grouped into two categories: (1) the Austrian school's assumptions are more realistic, or (2) its conclusions are closer to reality.

Of course, no economist seeks to be completely disconnected from reality or to advance a completely unrealistic theory. Yet it is better not to use the adjective "realistic" to describe the Austrian school, for a number of reasons.

Those who claim that the Austrian school advances a theory with more "realistic" assumptions sometimes do so from the feeling that it deals with real human beings rather than with maximizing calculators. Without suggesting that staying close to reality is superfluous, or that theory benefits from being completely abstract, a certain level of abstraction, and thus of detachment from reality, is necessary. There is an example I like to give my students to explain what a model is: the world map. If we had to create a "realistic" world map, one that renders reality perfectly, it would be... completely useless. First, it would be at a 1:1 scale, and thus impossible to carry around; moreover, roads and cities would be life-size, and thus hard to make out without the abstraction of colored lines and large letters marking the places. This "real" world map would be of no help in planning a route or in understanding how the world is laid out. Conversely, world maps as we know them, abstract as they are, distorting reality and omitting almost all the details that make the world what it is, help us understand geography.

The same can be said of economic theory: a model of human behavior that was perfectly real would be perfectly illegible. To make these behaviors legible and comprehensible, economics has chosen precisely to focus on rationality. Thinking about human choices and actions in terms of rationality makes it possible to isolate certain traits and thereby theorize about them. Of course, what makes man man goes well beyond his rationality. This therefore does not rule out the possibility, and the necessity, of multidisciplinary analyses. As Hayek put it: "Nobody can be a great economist who is only an economist, and I am even tempted to add that the economist who is only an economist is likely to become a nuisance if not a positive danger." Peter Boettke likes to add to this quote that the only thing worse than an economist who knows only economics is the political philosopher who knows no economics.

The second interpretation would be that Austrian theory better describes reality. This interpretation is even more problematic, since Austrian theory is meant to be subjectivist, and the significant economic phenomena, such as value or expectations, exist, so to speak, only in the lived experience of each individual. If Austrian theory does indeed seem to make better "predictions," one must understand that it asks to be judged not on the quality of its predictions but on the quality of its aprioristic reasoning. To claim that it better describes objective reality would be to misunderstand both the nature of what Austrian theory tries to explain and the criteria of scientificity in the social sciences.

It is therefore not quite accurate to claim that one type of economic analysis, and in particular that of the Austrian school, is more "realistic" than its alternatives. One can, however, perfectly well argue that the Austrian school better explains the phenomena in question, that it chooses more relevant abstractions, or that it tackles questions that seem more interesting and important than those addressed by other schools of thought.

The book L'Ecole Autrichienne de A à Z, edited by Professor Antoine Gentier and Professor François Facchini, is finally available!

The purpose of this book is to define the key concepts developed by the Austrian school of economics and to provide the main bibliographical references for those concepts. Each entry consists of an article accompanied by its bibliography. The book can be read as a dictionary or by following a thematic reading guide.

The book is intended for anyone curious about the state of the art on the contemporary contributions of the Austrian school of economics. It can serve as a complementary textbook, giving students definitions of 50 key concepts such as the entrepreneur, business cycles, money, subjectivism, law, finance, competition, monopoly, the firm, price, profit, the interest rate, and many others. I contributed the entries on opportunity cost, time, and the dispersion of knowledge.

It is thus a collective work bringing together contributions from the following authors:
Mathieu Bédard, Laurent Carnis, Virginie Doumax, François Facchini, Pierre Garello, Antoine Gentier, Nathalie Janson, Elisabeth Krecké, Youcef Maouchi, Patrick Mardini, David Moroz, Philippe Nataf, Steeve Paillard, Frédéric Sautet, Luc Tardieu.

15 authors, 50 articles, 196 pages, priced at 6.95 euros (excl. VAT) in print and 1.49 euros (excl. VAT) in iPad or iPhone format, with UPS delivery worldwide.
