What does peer review mean when applied to computer code?
A new experiment by the Mozilla Science Lab seeks to explore the interface between software engineers’ code review and the peer review of scientific articles that include code.
At a time when the use of computational methods in research is becoming ever more widespread and necessary, increasing numbers of research articles include pieces of code, small or large, used to manipulate data. Some of the people producing this code are trained computational scientists, while others are biologists who have picked up coding ‘along the way’. At present, some journals (including PLOS Computational Biology) publish ‘Software’ articles, whose main aim is to make available a useful piece of software, and in these articles the software is carefully peer-reviewed. But in the majority of articles, where the code is a smaller piece that is not the focus of the research, peer review of the code may be cursory at best.

We don’t know in any formal way whether this causes problems down the line, when others try to replicate or build on published work; anecdotally, there are sometimes difficulties in building on published code, even when it is fully available. So, should there be more formal review of code? And if so, how should it be approached?
In an experiment beginning this month, the Mozilla Science Lab will run a trial series of software reviews. Over the month of August, a set of volunteer Mozilla engineers will review snippets of code from previously published PLOS Computational Biology papers, treating them as they would any other piece of code in their professional lives. Once the initial reviews are complete, the Science Lab will approach the authors of the papers to offer them the opportunity to participate in a dialogue about the process. We hope the reviews will help answer:
- How much scientific software can be reviewed by non-specialists, and how often is domain expertise required?
- How much effort does this take compared to reviews of other kinds of software, and to reviews of papers themselves?
- How useful do scientists find these reviews?
The Science Lab will publish the results of this experiment in anonymized summary form; the reviews will not affect the status of the publications. We encourage authors to make use of the reviews they receive, perhaps via our post-publication commenting feature, but for this experiment authors are under no obligation to follow up with the journal after receiving the review of their article from Mozilla. You can find out more in a post here by Kaitlin Thaney, Director of the Mozilla Science Lab.
We look forward to learning from the results so we can improve the review process for scientific code.
I direct an NSF research coordination network, the Network for Computational Modeling in Social and Ecological Sciences (CoMSES Net). One of our initiatives is to develop a model code library and to establish procedures and best practices for peer evaluation of this code. You might want to take a look at this (website above) and let me know if you’d like to coordinate on this.
Michael Barton