Abstract
Many studies have investigated anchoring effects, which occur when people use initial values as starting points in their assessments. We investigated the prevalence of anchoring effects in quality assessments of scientific papers. This preregistered study is a follow-up intended to answer open questions from a previous study on the same topic. One open question concerns causal conclusions: to draw them, randomly selected respondents must assess the same paper under different conditions. In a survey, we asked corresponding authors to assess the quality of articles they had cited in previous papers. The respondents were randomly assigned to several experimental groups that received numerical anchors such as citation counts or numerical access codes to the questionnaire. Although our results reveal scarcely any effect of the citation counts presented to the respondents as possible anchors, there is a small but statistically significant effect of the random number (the numerical access code) presented to the respondents. In line with other studies that have investigated anchoring effects in assessments in various contexts, our study demonstrates the existence of an anchoring effect in research evaluation: in their assessment of papers, researchers seem to be influenced by numbers that bear no relationship to the quality of the paper being evaluated.