Abstract
Artificial intelligence (AI) is increasingly integrated into audiology and hearing health, yet evidence from across the health sciences shows that AI systems routinely embed structural biases that can exacerbate inequities, particularly for African and other low- and middle-income country (LMIC) populations. This review identified and analysed bias types in AI applications relevant to audiology and examined their ethical, cultural, and linguistic implications for LMIC settings. A narrative review design was adopted to accommodate the heterogeneity of the available evidence, for which thematic saturation was more appropriate than effect-size aggregation. Peer-reviewed articles published between 2015 and 2025 were retrieved from PubMed, Scopus, Web of Science, and IEEE Xplore, with inclusion requiring explicit engagement with both AI and issues of bias or equity. Rigour was assessed using a six-domain quality rubric, and data were extracted into structured evidence tables for thematic synthesis. Thirty-three studies met the inclusion criteria: six were audiology-specific empirical studies, all small in scale, and the remainder were reviews or conceptual analyses. No study presented empirical African audiogram, auditory brainstem response (ABR), or speech data. Six recurrent bias types were identified: representation, measurement, algorithmic, evaluation, deployment, and intersectional. Representation bias was the most frequent, exemplified by models trained on English-only corpora that underperform on tonal or indigenous languages. These biases manifest as misclassified hearing loss, reduced ABR accuracy, inequitable hearing-aid personalisation, and poor transferability of cochlear-implant algorithms. Advancing equitable AI in audiology requires multilingual, paediatric-inclusive, locally governed datasets; fairness-aware model design with stratified performance reporting; and African-led governance and capacity-building to support future validation and implementation research.