Abstract
The proliferation of generative artificial intelligence (AI) tools has fundamentally challenged traditional written assessment across higher education, with particular implications for laboratory-based disciplines, where written work may substitute for demonstration of practical competence. Addressing this challenge requires assessment approaches that prioritise direct performance. This study presents the adaptation of objective structured clinical examination (OSCE) methodology from medical education to the laboratory biosciences, demonstrating a practical framework for authentic assessment in the AI era. We describe and evaluate the transformation of a microscopy assessment in FHEQ Level 4 Biomedical Sciences from a traditional laboratory report to a 20-minute OSCE-style practical examination. The redesigned assessment maintained grade distributions while removing vulnerability to generative AI through real-time performance demonstration and conversational examination. The implementation achieved close alignment between learning outcomes and assessment methods, and its direct performance requirements provide inherent resistance to generative AI exploitation. Equity implications are complex and context-dependent: the format may pose barriers for students with communication differences while benefiting others, such as those with written communication difficulties, underlining the importance of balanced assessment portfolios and appropriate reasonable adjustments. This cross-disciplinary adaptation demonstrates that OSCE methodology offers a scalable solution to AI-era assessment challenges, with performance-focused design maintaining academic integrity more effectively than restrictive policies while enhancing authenticity and equity outcomes.