Lessons that peer review can learn from the mystery shopping industry

As a financial sideline in these days of rising costs and frozen pay, I have recently been carrying out occasional mystery shopping assignments to offset the costs of my social life. If you don’t know what mystery shopping is, imagine being paid to visit a pub or coffee shop, observe what’s on sale and the overall standard of customer service, and write a short, highly structured report within 24 hours of your return. It’s fairly straightforward for academics to do, as we are used to observing things closely, assessing them according to strict criteria, and writing reports in beautiful English afterwards. This has made me a pretty accomplished mystery shopper in a very short space of time, and the fact that I am being paid to drink half pints of excellent ale or eat sandwiches I would be buying anyway makes it even more attractive as a pastime.
One thing that has struck me about the whole process of mystery shopping is how streamlined and effective it is compared to the peer review process in academia. In essence, we all take for granted the fact that once every few months, we will send off a scholarly article into the ether, for two or three peer reviewers to look at, in the hope that they will look upon it favourably, and it will end up being published. Even if it’s not published, we look forward to some constructive criticism that will allow us to get better at framing our theories intellectually, and reporting our findings in a way that will allow others to build on our work. Sometimes we might even spend a month of our lives working on a funding proposal, for example, and hope that it will leap out of the reviewer’s pile and end up bringing a new, fruitful line of enquiry to life.
Except this is not always the case.
The escalation in the number of journal articles and funding applications submitted means that due diligence is not always given to submissions by reviewers. I am sure any academic reading this can think of the usual suspects. The reviewer who writes four or five lines in haphazard English, knocking off a factually incorrect review at the end of the day after spending ten minutes reading an extensive funding proposal. The reviewer who marks down anyone who has not cited the reviewer’s own publications. The reviewer who writes aggressively, dismissing the author’s work at an almost personal level. The reviewer who disappears from view and holds up publications for six months or longer. The prestigious reviewer who lends their name to the review of a large funding proposal, but in fact gets the rookie research associate to do it for them. I could go on, but you get my drift.
It occurs to me that, as academics, we get the review system we deserve, and to that end, I would like to propose introducing a ranking scheme for reviewers, so that we can assess how seriously and professionally they are taking the task. Only those reviewers who score highly over a number of years in the system should be able to become journal editors, or take senior research council positions. Points should be awarded for constructive, professional behaviour at each stage of the review process, so that excellent and poor reviewers can be quickly identified. I propose a structure in the table below.
[Lest there be outcry, allow me to introduce an element of balance. I feel I should also acknowledge that many reviewers take the role extremely seriously, and are both temperate and intellectually generous in equal measure. For this, I salute you.]
Now, on to the proposed table of points to be awarded.
| Stage of review | Points awarded |
| --- | --- |
| Completing a scholarly journal article review | +10 |
| Bonus for completing a scholarly journal article review by the editor’s deadline | +2 |
| Completing a follow-up review | +2 |
| Bonus for completing a follow-up review by the editor’s deadline | +2 |
| Completing a book review | +20 |
| The editor needs to amend one of your reviews or point out a factual error (for example, stating that a key reference is omitted when it is in fact present in the text) | –5 for every amendment |
| Submitting your review after the editor’s deadline | –2, and a further –2 for each additional week |
| A review that is negative or aggressive in tone | –10 for every negative or aggressive comment |
| Dropping out of a review more than 24 hours after accepting it | –2 |
| The editor is forced to reassign a review because the reviewer does not respond | –5 |
| A review has to be returned for extensive correction as a result of multiple queries, factual errors, or numerous grammatical or arithmetical errors | –10 |
| A review judged by the editor to be unhelpful to the author in terms of collaboratively improving the quality of published work in the field by the author and others | –5 |
| A correction or apology has to be published in a subsequent issue as a result of a factual error in your reviewing | –50 |
| The editor is unable to contact you following receipt of your review | –1 for every two attempts (including emails) |
  • Higher stakes reviews (for example for large funding bids) should only be carried out by reviewers with a certain average score.
  • Point losses should be closely monitored, and training should be provided to reviewers where necessary. Reviewers who improve their performance should be taken out of the monitoring system, but failure to respond to the additional training will result in the right to review articles and funding proposals being withdrawn.
  • A reviewer scoring 10 or more points for a job will become Active in the system; a reviewer who falls short of 10 will remain at Probationary status (see the sketch after this list).
  • Any reviewer scoring less than -5 points for a job will automatically revert to Probationary status.
  • A large number of negative scores should result in the reviewer being prevented from being given reviewing assignments for a set period of time, for example a couple of years.
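To make the arithmetic concrete, here is a minimal sketch of how the per-job tally and the Active/Probationary transitions described above might be tracked. It is purely illustrative: the names (ReviewerRecord, record_job, and so on) are my own invention rather than part of any existing editorial system, and the thresholds simply mirror the figures proposed above.

```python
# Hypothetical sketch of the reviewer scoring scheme proposed above.
# All names and thresholds are illustrative assumptions, not an existing system.

ACTIVE_THRESHOLD = 10    # score on a single job needed to become Active
PROBATION_TRIGGER = -5   # a job scoring below this reverts the reviewer to Probationary

class ReviewerRecord:
    def __init__(self, name):
        self.name = name
        self.status = "Probationary"   # every reviewer starts as Probationary
        self.history = []              # per-job scores, kept for the running average

    def record_job(self, points):
        """Record the total points awarded (or deducted) for one reviewing job
        and update the reviewer's status according to the rules above."""
        self.history.append(points)
        if points >= ACTIVE_THRESHOLD:
            self.status = "Active"
        elif points < PROBATION_TRIGGER:
            self.status = "Probationary"
        # otherwise the current status is retained

    def average_score(self):
        """Average per-job score, used to decide eligibility for higher-stakes reviews."""
        return sum(self.history) / len(self.history) if self.history else 0.0

# Example: an on-time article review (+10 +2) with one factual error corrected by the editor (-5)
reviewer = ReviewerRecord("Dr A. Nonymous")
reviewer.record_job(10 + 2 - 5)   # = 7, short of 10, so the reviewer stays Probationary
print(reviewer.status, reviewer.average_score())
```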
Do you think it would work? I would be very interested to read comments on this proposal, and to discuss them here.