| Negative perception of the impact of AI on peer review | Neutral/no impact | Positive perception of the impact of AI on peer review |
| --- | --- | --- |
| LLMs are effectively sophisticated autocompleters. By definition and design, they provide the most likely completion of a given string of text. This is likely to be correct and useful where the training set contains a large amount of consistent and correct data – for instance, asking an LLM to summarise a “textbook” problem or topic – but it is likely to be biased or simply wrong on cutting-edge research questions, such as those a peer reviewer is asked to comment on. | I don’t know what impact it will have. It will be a big problem if people rely on AI to review papers – since AI is about moving toward the mean, we will lose creative, cutting-edge papers. But if they just use it to write, organize thoughts, or find other similar papers, that might be OK. | The impact of generative AI tools on the peer review process can be beneficial when they are utilized as aids rather than substitutes. It’s crucial to recognize them as tools that augment our capabilities rather than as complete solutions. While these tools can streamline certain aspects of review, human oversight remains essential to maintaining the rigor and integrity of the process. |
| A peer review means rigorous review by a researcher knowledgeable on the specific topic. Anyone can use ChatGPT, regardless of research experience. | AI can accelerate peer review by summarizing papers and checking references, but it might also reduce the diversity of perspectives. AI could act as a filter or first-pass reviewer. | AI can only be a positive influence on peer review. It can’t be used for the entire review, but it is good for checking what an experienced reviewer has said. |
| The definition of peer review is that it is done by peers. AI is not an academic peer. AI can only draw from what is already in the LLM, whereas research should be making a new contribution. These two things are in conflict. | AI could be useful as a tool but not as a replacement. But I fear that many reviewers will use it as a replacement for thinking. | It can answer questions perfectly and is a tool that can drive technological progress. |
| All papers prepared with the help of AI are very eloquent and immensely stupid. I consider AI the profanation of true science. Mass-media methods like AI are a very dangerous infection able to kill any rational human thinking. | The impact of AI on peer review is neutral when reviewers engage sincerely with the manuscript and judiciously leverage AI as a supportive tool. | It will make the process faster, and it will make it easier to understand the key concepts of an article, saving valuable time and allowing better focus on checking thoroughly what is novel and important. |
| Generative AI could make the entire process of producing new research reports devoid of new ideas and curiosity. I am not eager for a future where researchers use ChatGPT to create papers, and then reviewers have ChatGPT review them. It leads to a tautological cycle in which LLMs produce work and then analyze their own work. I am skeptical of the ability of LLMs to produce fundamentally new ideas, and think they are limited to whatever is contained in their training data. | Some reviewers will use LLMs to do the reviews for them, which is of course bad. Others will just use them to improve their writing, verify the novelty of the work at hand, and search for relevant references, which is good. I think the same also holds for the authors… | Sometimes little details are overlooked by humans; combined reviews by AI and humans have better insights. Also, rephrasing the reviewer’s suggestions with AI tools can help the author better understand the points for improvement. |