Dmitri Khodjakov, Supervisor of the U.S. Peer Review Systems team, was invited to present at the COPE North America Seminar on August 13, 2014, as part of a panel on plagiarism-checking software.
Dmitri was joined on the panel by Charon Pierson, Editor-in-Chief of the Journal of the American Association of Nurse Practitioners and COPE council member, and Jason Roberts, Senior Partner at Origin Editorial. All three panelists, as well as many of the attendees, have experience setting up and running CrossCheck on peer-review systems and so had a great deal of practical experience to draw on when discussing policies and procedures for using similarity-checking software.
The issues discussed ranged from the costs and benefits of automated similarity-checking on all submitted manuscripts, to the threshold at which a similarity report calls for further investigation, to the appropriate actions to take when a report indicates possible plagiarism. The speakers agreed that, while automated plagiarism checking is a very powerful tool for detecting unethical behavior, analyzing the reports still tends to be a labor-intensive process. Taylor & Francis has found that an average report takes fifteen minutes to check, but more complex reports, such as those showing significant similarity to more than one source, can take longer.
During the panel, it was suggested that larger publications with many submissions might check papers for similarity randomly or on a rotating schedule if the editors don't have time to run CrossCheck on every paper. Checking on acceptance is another option for journals that receive a very large number of papers, ensuring that everything selected for publication is screened. Some attendees considered the percentage score in the similarity reports somewhat useful as a guideline, while Dmitri and others said they disregard the percentage score entirely and look only at the specific overlap to judge the extent and severity of the match. The percent overlap that warrants further investigation can vary from one field to another, and the score gives no insight into exactly which text has been matched within the paper, so it is not a reliable measure on its own. Taylor & Francis recommends that every report be checked, whatever the similarity score.
Attendees agreed that when dealing with cases of academic misconduct or a breach of publishing ethics, the Committee on Publication Ethics (COPE) provides helpful resources, including guidelines and flowcharts that set out the appropriate actions to take when similarity-checking software indicates a possible ethical breach. Taylor & Francis editors should follow the COPE guidelines and contact their Managing Editor immediately if they have any concerns about a breach of publishing ethics in their journal.
The panel generated a lot of interesting discussion and was a great chance to talk with publishers and editors about their experiences. If you would like more information on CrossCheck or COPE, please speak to your Taylor & Francis Managing Editor, who will be happy to discuss it further with you.