Abstract
BACKGROUND: The use of artificial intelligence (AI) and its implications for academic integrity are important contemporary topics. There are no clear regulations governing the use of AI in academic institutions in Ukraine. This study aimed to explore the perceptions of medical students, interns, and PhD candidates regarding academic misconduct and AI use.

METHODS: This cross-sectional study was conducted via an online survey between October and December 2024. Participants were medical students, interns, and medical PhD students at the Bogomolets National Medical University in Kyiv, Ukraine.

RESULTS: Among the 244 participants, the majority (84%) reported using AI for academic purposes, with ChatGPT being the most commonly used tool. AI was primarily employed for information searches (70%), while a smaller proportion admitted using it for academic dishonesty, such as writing essays (14%) or submitting pre-written assignments (9%). About half (51%) of participants reported having previously cheated on tests. Opinions on AI's impact on academic integrity were divided: 36% considered AI use to be misconduct, 26% perceived it as acceptable, and 38% were undecided. Most participants viewed AI as beneficial for learning and work, and 37% indicated they would continue using AI professionally. Perceived advantages of AI included time efficiency, enhanced learning, and accessibility, while concerns were raised about errors, lack of critical thinking, over-dependence, and ethical risks.

CONCLUSION: These results highlight the widespread adoption of AI for academic purposes among medical students, interns, and PhD candidates, with both notable advantages and significant ethical and practical concerns. Given the prevalent use of AI for academic work among medical students, interns, and PhD students in Ukraine, national rules are needed to define which uses of AI constitute academic dishonesty and academic misconduct.