Replication & Open Scholarship
One of my favourite blog sites is Retraction Watch. It's a little like watching Border Security: Australia's Front Line, a show that takes viewers behind the scenes of Australia's Customs, Immigration and Quarantine departments. Watching folks try to sneak past people intent on catching smuggled contraband has a morbid fascination; it's the driver of reality TV around the world. But when it intrudes on your professional world, it starts to look a little different.
A NY Times article (published April 16, 2012) describes the rise of academic fraud in the scientific publishing community. The 'currency' of academia certainly isn't denominated in dollars (with the minor exception of grant funding, which features ever more prominently in academic staff rankings as more meaningful metrics of contribution remain elusive). The real currency of the academy is reputation.
Reputation is established by peer-reviewed publication in journals that bring care and critical attention to the quality, process, and conclusions of work presented to the scholarly community. That process of carefully curated, critically reviewed sharing of research outcomes is under siege from both within and without. From within, by the challenge of sustaining quality procedures that ensure the highest calibre of writing, methodology, and presentation of accurate, reproducible results. And from without, by the rapidly changing means of communication, production, dissemination, and sharing of the outputs of that process.
The troubling reality is that publication of fraudulent research reports is on the rise. That could be for a number of reasons: greater care in the review process catching things that might previously have slipped through; increasing sloppiness in that process allowing things that shouldn't pass into 'print' (used loosely for distribution in various media formats); increasing pressure on researchers to get "their data" published in the pursuit of increasingly scarce research dollars; and increasing numbers of researchers who are unethical (whether consciously aware of it or not) or just plain deceitful.
The latest chapter in this story concerns the failure to replicate published findings. One hypothesis is that the incentives to publish predispose researchers to take liberties that influence how they report what they did. The result is that attempts to follow published methodologies produce different outcomes. Why? Because what really happened wasn't quite what was written up in the published paper. That adds a fifth reason why retractions have gone up, and another category to the list of fraudulent practices.
The Reproducibility Project is setting out to test this prospect in papers published in the psychological sciences. The goal of the project is to

...estimate the reproducibility of a sample of studies from the scientific literature. The project is a large-scale, open collaboration involving dozens of scientists from around the world. The investigation is currently sampling from the 2008 issues of three prominent psychology journals - Journal of Personality and Social Psychology, Psychological Science, and Journal of Experimental Psychology: Learning, Memory, and Cognition.

There are many reasons to follow this project. We assume that work making it through peer review by qualified reviewers reflects careful efforts to reveal how things in nature work. If the outcomes of this publishing process don't reliably accomplish that, we're in a heap of trouble. Not only is the primary set of metrics for career reward and advancement now suspect, but our understanding of the world around us must be as well. And the system has a built-in bias that may contribute significantly to this concern - an unwillingness to publish failures to reject the null hypothesis.
Granted, not every experiment can be replicated. Large longitudinal clinical trials, for one: they're too expensive, take too much time, and are conducted under conditions that just can't be reproduced exactly even if you had the time and money. But many others can be, and aren't. Journals don't like replications; they prefer the shiny new factoid or the innovative discovery. Journals are run by people, too.
But the vast majority of experiments result in failures to reject the null hypothesis. And we're blind to them. That means not only are we doomed to repeat the past - how do we know it IS the past if we don't share this information? - but we're also unable to see the full context of the experiments that do produce rejected null hypotheses.
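The cost of that blindness can be illustrated with a small simulation - my own sketch, not something from the project itself. Suppose journals only publish experiments that clear the conventional p < .05 bar. Then the published record doesn't just hide the null results; it systematically overstates the size of the effects that do get through:

```python
import random
import statistics

random.seed(42)

def run_experiment(true_effect=0.2, n=30):
    """Simulate one two-group experiment; return (observed effect, significant?)."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(true_effect, 1) for _ in range(n)]
    observed = statistics.mean(treatment) - statistics.mean(control)
    # Crude two-sample z-style test using the pooled standard error
    se = (statistics.stdev(control) ** 2 / n +
          statistics.stdev(treatment) ** 2 / n) ** 0.5
    return observed, abs(observed / se) > 1.96

results = [run_experiment() for _ in range(5000)]
all_effects = [obs for obs, _ in results]
published = [obs for obs, sig in results if sig]  # the journals' filter

print(f"True effect:                 0.20")
print(f"Mean effect, all studies:    {statistics.mean(all_effects):.2f}")
print(f"Mean effect, published only: {statistics.mean(published):.2f}")
```

With a small true effect and small samples, only the luckiest (largest) observed effects reach significance, so the "published" mean lands far above the true 0.20 while the full set of studies averages out near it. The numbers here are made up for illustration, but the mechanism is exactly the file-drawer bias at issue: without the unpublished failures, we can't see how inflated the successes are.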
The early indications aren't very promising. Another project in the psychological sciences, PsychFileDrawer, has been working at replicating experiments for a year now. It has attempted nine replications and succeeded at three. Not a promising start.
-- pdl --