Simulating clinical features on chest radiographs for medical image exploration and CNN explainability using a style-based generative adversarial autoencoder

Authors: Kyle A Hasenstab, Lewis Hahn, Nick Chao, Albert Hsiao

Abstract

Explainability of convolutional neural networks (CNNs) is integral to their adoption into radiological practice. Commonly used attribution methods localize image areas important for CNN prediction but do not characterize the relevant imaging features underlying these areas, acting as a barrier to the adoption of CNNs for clinical use. We therefore propose Semantic Exploration and Explainability using a Style-based Generative Adversarial Autoencoder Network (SEE-GAAN), an explainability framework that uses latent space manipulation to generate a sequence of synthetic images that semantically visualizes how clinical and CNN features manifest within medical images. Visual analysis of the changes across these sequences then facilitates the interpretation of features, thereby improving explainability. SEE-GAAN was first developed on a cohort of 26,664 chest radiographs from 15,409 patients at our institution. SEE-GAAN sequences were then generated for several clinical features and for CNN predictions of NT-pro B-type natriuretic peptide (BNPP) as a proxy for acute heart failure. Radiological interpretations indicated SEE-GAAN sequences captured relevant changes in anatomical and pathological morphology associated with clinical and CNN predictions and clarified ambiguous areas highlighted by commonly used attribution methods. Our study demonstrates SEE-GAAN can facilitate understanding of clinical features for imaging biomarker exploration and improve CNN transparency over commonly used explainability methods.
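For intuition, the sketch below illustrates the kind of latent space manipulation the abstract describes: encode an image to a latent code, step that code along a direction associated with a clinical feature or CNN output, and decode a synthetic image at each step. This is only a minimal illustration under stated assumptions; the toy linear encode/generate functions, the latent_sequence helper, and the way the feature direction is obtained are placeholders, not the paper's actual SEE-GAAN architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the trained SEE-GAAN encoder and generator. The real
# model is a style-based generative adversarial autoencoder; these random
# linear maps exist only so the traversal logic below actually runs.
LATENT_DIM, IMG_DIM = 64, 128 * 128
E = rng.standard_normal((LATENT_DIM, IMG_DIM)) / IMG_DIM      # "encoder"
G = rng.standard_normal((IMG_DIM, LATENT_DIM)) / LATENT_DIM   # "generator"

def encode(image):
    """Map a flattened image to a latent code."""
    return E @ image

def generate(w):
    """Map a latent code back to image space (a synthetic image)."""
    return G @ w

def latent_sequence(image, direction, steps=8, scale=3.0):
    """Decode a sequence of synthetic images along a latent feature direction.

    `direction` is assumed to be a unit vector in latent space correlated
    with a clinical feature or CNN prediction (e.g., fit by regressing the
    feature onto latent codes); the abstract does not specify how SEE-GAAN
    derives it.
    """
    w = encode(image)
    alphas = np.linspace(-scale, scale, steps)  # signed step sizes
    return [generate(w + a * direction) for a in alphas]

# Usage: traverse a random "radiograph" along a random unit direction.
image = rng.standard_normal(IMG_DIM)
direction = rng.standard_normal(LATENT_DIM)
direction /= np.linalg.norm(direction)
frames = latent_sequence(image, direction)
print(len(frames), frames[0].shape)  # 8 (16384,)
```

Visually inspecting how anatomy and pathology morph across such a frame sequence is what lets a radiologist interpret which imaging features the latent direction, and hence the clinical feature or CNN prediction, encodes.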
