Abstract
Chest radiographs (CXRs) are among the most complex, yet most frequently performed, imaging examinations worldwide, and are susceptible to reporting discrepancies, especially in the detection of lung cancer (LC). The use of artificial intelligence (AI) as a decision-support tool for CXRs has been shown to aid earlier LC detection and reduce reporting discrepancies. Nevertheless, the clinical implementation of AI within radiology is still in its infancy. More pragmatic evidence is needed to enhance key stakeholders' perceptions and understanding of AI within clinical practice. Most AI studies focus on comparing sensitivity, specificity, and accuracy against a re-review of pre-selected images by radiologists, often paired with complex technical data and statistical analyses. Improved understanding and awareness of AI capabilities, together with learning from AI discrepancies, could help optimize human-AI interactions and acceptance within clinical practice whilst informing future research. The aim of this educational review is to provide a visual portrayal of real-world examples of missed LC cases from within a retrospective AI study, explore the impact of image quality on AI performance, and highlight cases of AI discrepancies.