
State of peer review 2024

Results

Key findings

  • 50% of respondents said that the number of peer review requests they receive has increased in the last three years
  • 47% of respondents receive less than one peer review invitation per month
  • 54% of respondents feel that they receive the right number of peer review requests, similar to the figure in 2020
  • 35% of early-career researchers say they have more time for peer review compared with the number of requests they receive
  • 52% of respondents say they would prefer to review double-anonymous manuscripts compared with single-anonymous or open manuscripts

Time available for peer review

We asked respondents about the number of peer review requests that they receive and the time they have available for peer review.


In response to the question “How many requests to peer review individual manuscripts do you receive, on average, each month? This includes requests from all publishers.”, 46.9% of respondents said they received less than one request per month. The proportion of respondents reporting more than 11 peer review invitations per month was low, at around 3%.


Responses to the question "How many requests to peer review individual manuscripts do you receive, on average, each month? This includes requests from all publishers."

There did not appear to be a significant difference in the volume of peer review requests received by male and female respondents. However, there were differences in the volume of peer review requests according to career stage. Respondents who work in industry were much more likely to report receiving less than one peer review invitation per month, and received far fewer requests overall compared with other groups. Faculty members, Associate Professors, and Professors (middle to senior-career researchers) reported receiving the highest number of peer review requests on average; however, 41% of researchers in this group still reported receiving less than one invitation per month.

When responses to this question were analysed by country and geography, the most significant difference was between Europe and the rest of the world, with 24% of respondents from European countries receiving 3 or more requests per month compared with 19% of respondents from the rest of the world. Similarly, fewer respondents from China and India received 3 or more requests per month (15% and 16% respectively, see Appendix).
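As a rough illustration of how a regional gap like this can be checked for statistical significance, the sketch below runs a standard two-proportion z-test on the reported shares (24% vs 19%). The per-region group sizes are hypothetical assumptions, since the report does not give respondent counts by region:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test statistic for the difference between two sample proportions."""
    x1, x2 = p1 * n1, p2 * n2          # implied counts in each group
    pooled = (x1 + x2) / (n1 + n2)     # pooled proportion under H0: p1 == p2
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Reported shares receiving 3+ requests/month; group sizes (900, 2200) are ASSUMED for illustration.
z = two_proportion_z(0.24, 900, 0.19, 2200)
print(round(z, 2))  # |z| > 1.96 would indicate significance at the 5% level
```

With group sizes of this order, the gap comfortably clears the conventional 5% threshold; with much smaller groups it would not, which is why the per-group counts matter.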

When responses were analysed by World Bank country income group, there were no major differences in the frequency of invitations received between respondents from high-income, upper middle-income, and lower middle-income countries; however, respondents from low-income countries reported receiving more peer review requests on average than the other groups (see Appendix). Low-income country respondents represented 1.7% of the responses received (53 individuals).
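The figure above also lets us back out the approximate total number of survey responses: if 53 individuals make up 1.7% of respondents, the overall total is roughly 53 / 0.017. A one-line check:

```python
# Back out the implied total from "1.7% of responses = 53 individuals".
low_income_respondents = 53
share = 0.017
total = low_income_respondents / share
print(round(total))  # roughly 3,100 respondents overall
```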

Responses to the question “In the last three years, do you feel that the number of peer review requests that you receive has gone up, gone down, or stayed roughly the same?”

Respondents were also asked “In the last three years, do you feel that the number of peer review requests that you receive has gone up, gone down, or stayed roughly the same?” Almost 50% of respondents reported that the number of requests that they receive has gone up, with only 11.5% reporting a decrease. There were no significant differences by gender or career stage (see Appendix), although those respondents who work in industry were significantly less likely to report an increase in requests (32%), and more likely to report a decrease in requests (26%). When the results were analysed by World Bank country income group, there were no differences between lower middle-income countries, upper middle-income countries or high-income countries, but low-income country respondents were much more likely to report an increase in requests (77%) and less likely to report a decrease in requests (6%).

The distribution of responses was similar for respondents from China, India, and the USA. Respondents from Europe were least likely to report a decrease in invitations and were more likely than respondents in China, India, and the USA to report an increase in invitations.

Responses to the question “What best describes the time you have available for peer review?” in both 2020 and 2024

In response to the question “What best describes the time you have available for peer review?”, 30% of respondents reported that they have more time available for peer review compared with the number of requests they receive, while 16% reported receiving too many requests. This is in contrast to the results of the 2020 IOP Publishing peer review survey, in which only 18% of respondents reported that they had more time available compared with the number of requests they receive, and 26% of respondents noted that they received too many requests. The overall proportion of respondents reporting that they receive the right number of requests was 54%, which is comparable to the figure of 56% in the 2020 report.


There were no significant differences by gender. When the responses to this question were broken down by career stage, Associate Professors or higher were more likely to report receiving too many requests (28% of respondents) compared with other groups such as PhD students (7%) or postdocs (9%, see Appendix).


When the responses were analysed by country and geographic region, there was a noticeable difference between respondents based in India or China and those based in the rest of the world (see Appendix). Respondents in China or India were significantly less likely to report receiving too many requests (6% and 7%, respectively), compared with respondents in other countries (23%). When the results were analysed by World Bank country income group, respondents in high-income countries were significantly more likely to report receiving too many requests compared with respondents in low- and middle-income countries (30% vs 10%, see Appendix).

Bias in peer review

Responses to the question “Bias is defined as prejudice for or against one person or group, especially in a way considered to be unfair. Have you ever experienced what you perceive as bias in the peer review process?”

The majority of respondents (84%) did not feel that they had ever experienced bias in the peer review process, up from 76% in the 2020 survey. Of those who did report experiencing bias in peer review, the most common type was geographical bias (6% in 2024 vs 8% in 2020), followed by subject area bias and institutional bias. The least common types of bias reported were gender, sexuality, and disability: only 0.6% of respondents reported experiencing gender bias, compared with 2% in the 2020 survey.

When responses to this question were analysed by World Bank income group, respondents from low-income countries were least likely to report having experienced bias, and the reported rates of bias increased in line with income group. High-income country respondents were most likely to report experiencing bias (see Appendix). 

Responses to the (optional) question “If you experienced bias in the peer review process, what did the bias relate to?”

Respondents who felt that they had experienced bias in the peer review process were invited to answer an optional free text question to give more details of their experience. Over 150 free text responses were received.


A strong theme in the comments was perceived geographical bias of reviewers against authors from specific global regions or countries. Many respondents noted that they felt they had been discriminated against, or had witnessed discrimination against, authors from certain regions. Other respondents lent credibility to these concerns by noting that they themselves were biased against certain regions. For example, one respondent commented, “I have to admit that I already assume a paper to be of low-quality if the author is based in [COUNTRIES REDACTED]”.


Another recurrent theme in the free text responses was bias on the part of editorial staff for or against reviewers from specific countries. Many respondents felt that their reviewer reports and recommendations had been disregarded or given less weight by editors because of their nationality. There was a feeling that editors listened less to the voices of reviewers from low- and middle-income countries compared with higher-income countries. There was also a feeling that editorial teams predominantly based in high-income countries were biased towards authors from the same countries: “Authors close to editors have privileged conversation vs. isolated authors in developing countries”.


Other forms of bias highlighted by respondents included bias against early-career researcher authors (by both reviewers and editors), bias for or against certain institutions, and bias against particular methodologies or scientific concepts.


Several respondents also claimed that their manuscripts had been purposely held up or delayed by reviewers who were working in the same field and wished to publish their own research first.

Models of peer review

Double-anonymous peer review is a model of peer review in which the reviewers are not aware of the identity of the authors of a manuscript while the manuscript is under review. The predominant model of peer review in the physical sciences remains single-anonymous peer review, in which reviewers are aware of the identities of the manuscript authors. Open review is a model of peer review in which all parties are aware of each other’s identities throughout the peer review process. We asked respondents a series of questions relating to their perceptions of different models of peer review.

Responses to the question “Have you ever taken part in peer review of a double-anonymous manuscript (either as a reviewer or as an author)?”

Responses to the questions “As an author of a manuscript, which of the following models of peer review would you prefer?” and “As a peer reviewer of a manuscript, which of the following models of peer review would you prefer?”

When respondents were asked which model of peer review they preferred when they are an author and when they are a peer reviewer, double-anonymous was the most popular response (over 50%) in both cases. There was a small discrepancy between the two responses, with double-anonymous and open review scoring higher for the author question and single-anonymous scoring higher for the reviewer question.

Motivations for reviewing

We wanted to better understand what motivated reviewers to accept a review invitation, and how these motivations might have changed since the last survey in 2020.

Responses to the question “When you receive an invitation to review a manuscript, how important are these factors in motivating you to accept the invitation?”

As in the 2020 survey, the biggest motivators were related to interest in the paper and the reputation of the journal. The least motivating factor for respondents was in-kind benefits or cash, although the average score for this factor had increased since 2020.

Respondents were asked which factors made them more likely to decline an invitation to review, and were invited to submit their responses as free text. The most common answers were:

  • Lack of availability/being too busy
  • Manuscript outside of field of expertise
  • Poor-quality abstracts/typographical or grammatical errors in the abstract or title
  • Perceptions of the journal/publisher
  • Conflicts of interest

The survey included an option for respondents to tell us whether a journal or publisher had ever given them a negative reviewing experience that put them off reviewing in the future. Responses to this question were free text, and over 500 were received.

Most respondents said that they had not had a negative experience that put them off reviewing in future. Other respondents detailed their negative experiences, which included:

  • Spending a lot of time and effort on a high-quality review for a journal, only to receive no further review invitations from that journal.
  • Dissatisfaction with reviewing within the double-anonymous model (not knowing the identity of the manuscript authors).
  • Frustration that they had provided peer review reports for a journal, but the same journal did not send their own manuscripts out for peer review.
  • Having a poor experience as an author for a journal (for example, due to long peer review times) which has then put them off reviewing for that journal.

By far the most common complaint was feeling that their reports and recommendations had been ignored by editorial staff, with manuscripts accepted for publication against the reviewer’s recommendation.


The responses to this question contained many references to specific publishers that respondents did not want to review for, because of perceptions about the fairness and rigour of the peer review systems in place.


We asked respondents what rewards or recognition they value when it comes to peer review. Respondents noted that feedback on the final decision on a manuscript was valuable to them, as was feedback on the quality of their review. These results were very similar to those of the 2020 survey.

Responses to the question “What rewards or recognition do you value for reviewing manuscripts? Please assess each of the points below on a scale of 1 to 5, where 1 = not valuable and 5 = extremely valuable.” (Feedback)

When asked what rewards or recognition they valued for reviewing manuscripts, the highest scoring responses all related to feedback, with feedback on the final decision of the paper ranking first, followed by feedback on the quality of the review, and access to other reviewers’ comments.


The lowest scoring response was to be named as the reviewer on the published article.


We asked the same question in our 2020 survey, and when results from 2024 and 2020 were compared, “Certificate/badge for passing a peer review training programme”, “Annual journal-level reviewer awards” and “Discount/waiver on APCs” all appeared to significantly increase in importance since the 2020 survey. “Feedback on final decision on paper”, while still the highest scoring response, had decreased in importance compared with 2020 (see Appendix).

Responses to the question “Please indicate what kind of positive impact each of the following initiatives would have on your experience of the peer review process, where 1 = no positive impact and 5 = overwhelmingly positive impact.”

Respondents were asked to evaluate types of innovations and initiatives in terms of the impact they might have on the peer review process. The highest-ranking response was “Improvements to online manuscript and review submission systems” followed by “More communication between authors, reviewers, and editors” and “Better and more accessible peer review training”. The lowest-ranking response was “Publishing manuscripts on pre-print servers before peer review”.


When comparing responses from 2024 and 2020, the most noticeable change was a significant increase in the ranking of “Better and more accessible peer review training”, with all other responses remaining broadly similar.

Generative AI and peer review

Responses to the question “In your opinion, what impact will open-source generative AI tools, such as ChatGPT, have on the peer review process?”

Roughly 35% of respondents thought that generative AI tools would have a negative impact on the peer review process. 36% were neutral or thought that generative AI would have no impact, while 29% thought it would have a positive impact.


Respondents were given the option to elaborate on what they thought the impact of generative AI on the peer review process might be in a free-text response. Responses were extremely diverse. For example, one respondent claimed: “AI can be used for reviewing a paper, but its attitudes must be checked by the reviewer individually”, while several others expressed concerns about generative AI being used to write peer review reports, noting that current open-source generative AI models were not capable of accurate scientific critique.


Several respondents suggested that AI could help to check manuscripts for plagiarism and English language quality, thereby filtering out problematic or low-quality manuscripts before they go out for peer review.


Other respondents had wholly negative attitudes towards AI, with responses including: “AI is evil, burn it with fire!” and “AI is so far very unintelligent…it could NEVER answer my questions properly online. It is very politically in line with their owners’ positions. They are a tool of corrupting human moral standard.”


Other comments included: “AI is a destructive tool for mankind. [It] should be completely banned in academia and research. In industry, however, AI is useful.” And “Could be useful when used to improve language, could be very harmful when used to generate peer review (or article)”.


The most common response to this question was that generative AI tools can provide some useful outputs, but expert human verification and editing is always required before any AI-generated text is used in the peer review process.
