Abstract
Survey data collection can be administered in different modes, including face-to-face interviews and self-completion modes such as paper-and-pencil or web-based surveys. Do data collected in these different modes reliably differ across countries? We address this question using responses from 145,361 respondents in 29 countries to 46 questions in Rounds 8-10 of the European Social Survey (ESS), a large-scale social research project that conducts cross-national surveys in Europe and whose data have been used in thousands of publications. The ESS is typically administered in face-to-face interviews, but due to the COVID-19 pandemic, data from nine countries in the tenth round were collected using self-completion methods. In line with previous findings demonstrating differences between administration modes, we show that machine-learning models can predict how surveys were administered, suggesting that data collected in the different modes are not comparable. More critically, we show that even when these models are trained on data from one set of countries, they can predict how surveys were administered in a completely novel country, which indicates that responses in different administration modes differ reliably across countries. Finally, we investigate extreme response styles as one difference between the response profiles of the two modes. Beyond addressing concerns about data comparability in the ESS, these findings reveal that survey administration modes lead to reliable cross-national differences in response profiles.