Abstract
Artificial intelligence (AI) systems based on large language models (LLMs) are becoming an increasingly important part of biomedical research, assisting with tasks ranging from study design to data analysis and publication. Although these systems increase productivity by reducing the time required for individual tasks, they also expose their users to a serious risk: the systematic distortion of outputs through algorithmic sycophancy. The honesty of these AI systems is fragile and can readily break down when user prompts contain errors or when the system is placed under pressure to agree. This viewpoint outlines what algorithmic sycophancy is and the mechanisms thought to underlie it, and explains how it can lead to systematic distortion of biomedical research. Bringing this issue to light is essential so that LLM-based AI systems are used with appropriate caution. Understanding this threat can also help limit the propagation of unreliable findings through the literature, which poses a significant safety risk to biomedical research as a whole.