Empowering medical students with AI writing co-pilots: design and validation of AI self-assessment toolkit


Abstract

BACKGROUND AND OBJECTIVES: Assessing and improving academic writing skills is a crucial component of higher education. To support students in this endeavor, a comprehensive self-assessment toolkit was developed to provide personalized feedback and guide their writing improvement. The current study aimed to rigorously evaluate the validity and reliability of this academic writing self-assessment toolkit.

METHODS: The development and validation of the toolkit involved several key steps. First, a thorough literature review was conducted to identify the essential criteria for authentic assessment. Next, medical students' reflection papers were analyzed to gain insight into their experiences using AI-powered tools for writing feedback. Based on these initial steps, a preliminary version of the self-assessment toolkit was devised, and an expert focus group discussion was convened to refine its questions and content. To assess content validity, the toolkit was evaluated by a panel of 22 medical student participants, who reviewed each item and provided feedback on its relevance and the toolkit's comprehensiveness for evaluating academic writing skills. Face validity was also examined, with the students assessing the clarity, wording, and appropriateness of the toolkit items.

RESULTS: The content validity evaluation showed that 95% of the toolkit items were rated as highly relevant, and 88% were deemed comprehensive in assessing key aspects of academic writing. The students suggested minor wording changes to enhance clarity and interpretability. The face validity assessment found that 92% of the items were rated as unambiguous, and 90% were considered appropriate and relevant for self-assessment. Student feedback led to the refinement of a few items to improve their clarity in the context of the Persian language. Reliability testing demonstrated that the toolkit measured students' writing skills consistently and stably over time.

CONCLUSION: The comprehensive evaluation process established the academic writing self-assessment toolkit as a robust and credible instrument for supporting students' writing improvement. The toolkit's strong psychometric properties and user-centered design make it a valuable resource for enhancing academic writing skills in higher education.
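The abstract does not specify how the per-item relevance percentages were computed. A common approach in instrument validation is the content validity index (CVI): each item's I-CVI is the fraction of raters marking it relevant, and the scale-level S-CVI/Ave is the mean across items. The sketch below illustrates that arithmetic with invented ratings; the 4-point scale, the relevance threshold, and all numbers are assumptions for illustration, not study data.

```python
# Hypothetical illustration of content validity index (CVI) arithmetic.
# All ratings below are invented; they are not data from the study.

def i_cvi(ratings, threshold=3):
    """Item-level CVI: fraction of raters scoring an item at or above
    `threshold` (here, 3 or 4 on a 4-point relevance scale)."""
    return sum(r >= threshold for r in ratings) / len(ratings)

def s_cvi_ave(items):
    """Scale-level CVI: average of the item-level CVIs."""
    return sum(i_cvi(r) for r in items) / len(items)

# Example: 3 items, each rated by 5 hypothetical raters on a 1-4 scale.
items = [
    [4, 4, 3, 4, 3],  # all 5 relevant  -> I-CVI = 1.0
    [3, 2, 4, 4, 3],  # 4 of 5 relevant -> I-CVI = 0.8
    [4, 3, 3, 2, 2],  # 3 of 5 relevant -> I-CVI = 0.6
]
print([round(i_cvi(r), 2) for r in items])  # [1.0, 0.8, 0.6]
print(round(s_cvi_ave(items), 2))           # 0.8
```

An I-CVI cutoff (often 0.78 or higher with several raters) is then used to flag items for revision, mirroring the item refinements the study describes.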
