“Sokal experiments” are experiments that test the robustness of the peer review or editorial review processes of academic journals. The original Sokal experiment was carried out by physics professor Alan Sokal, who tricked a journal into publishing a terrible paper, full of nonsense and pretentious lingo, to demonstrate the biases and lack of intellectual rigor in some strands of the social sciences. In another such experiment, papers that were originally rejected but ultimately published elsewhere were resubmitted to the same journals, and quite often succeeded on their second run, suggesting (1) that there is a lot of blind chance involved and (2) that editors and peer reviewers don’t follow the literature all that well. You can find many other “Sokal experiments” out there.

One of them, recently published in Science, tried a similar experiment, specifically targeted at open-access journals. The authors submitted a paper about which they write that “any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s short-comings immediately. Its experiments are so hopelessly flawed that the results are meaningless.” It is basically a demonstration that the mere fact that something is written somewhere (or published somewhere) does not make it true.

Open-access journals are under a lot of criticism because they operate more or less on a “pay-to-publish” basis. It is often implied that these journals’ incentive structure leads them to have lower standards because of the money involved. What I think these studies imply is that bad review processes and open-access journals ultimately hurt “science”: all researchers in a discipline are affected by the publication of bad papers, which lowers the standards of science. This is exemplified by the quotation in the coda: “Journals without quality control are destructive, especially for developing world countries where governments and universities are filling up with people with bogus scientific credentials”. I think it is an overstatement to describe these bad papers as destructive, and the costs are not as diffuse as these studies imply. If they do have a real impact on recruitment (something none of those studies explore), I think it says a lot more about the flaws of those hiring processes than about open-access journals.

While Science’s study certainly is informative, and I’m sure there are tons of terrible papers published in these journals, I’m not sure the experiment is particularly significant. First, bad papers sometimes end up in good, properly peer-reviewed journals too. Our disciplines are becoming increasingly specialized while at the same time often very much interdisciplinary, which makes peer review rather difficult in many cases.

But my biggest problem with many of these experiments is that they take a much too narrow definition of what peer review is. If you define it strictly as the process involving three parties (the author, the editor, and the blind reviewers), then yes, these experiments show that it has failed. But peer review, in the broad sense, is much more decentralized than that. It rests on the reputation of researchers at large, and on the individual responsibility of researchers to be cautious about who and what they cite. It is an ongoing process that neither starts nor stops with publication.

For example, if I’m going to make a controversial claim in print and I want it to be supported by previous literature, I’m probably not going to cite Nobody from Nowhere University in the No-name Review of Anything Goes. I’m going to find reputable sources. Sometimes it might be enough to cite good authors publishing material in a lower-tier open-access journal, or the other way around. The criteria for these things are largely tacit, and most of the time subject to disagreement.

Also, double-blind review is only a first screening process. The real test comes with the passage of time: whether the paper gets picked up and cited by other researchers, and whether it has an impact on the scientific community. Only after a while can you judge whether a paper was good or important, or what kind of citation it is suited for.

For these reasons, the process of double-blind review shouldn’t become a fetish. It serves a purpose, but its goal might be fulfilled by other means. As such, terrible papers in peer-reviewed (or supposedly peer-reviewed) journals might not hurt “science” as a whole, but mostly these journals themselves.

Ultimately, a more significant experiment would take a much, much longer time frame and study not only the double-blind review process, but also the article’s citation pattern and its impact at large. Such experiments, however, would be bound to fail: they would show that blatantly bad research doesn’t get anywhere, even if it is published. And if the concern is bad recruitment, then those studies should focus on recruitment itself, rather than on review processes at open-access journals as a distant proxy. That is the problem with “Sokal experiments.”
