
Abstract: The shortage of large-scale, expert-annotated chest X-ray datasets poses a challenge for building high-precision abnormality detection models. Weakly supervised learning (WSL) methods show significant promise for overcoming this problem by leveraging information from freely available radiology reports. However, most of these methods use only image-level pathological findings and fail to exploit the relevant anatomy, which plays an important role in radiologists' reporting process. In addition, weak labels extracted from reports are often sparse and noisy, and a naive imputation strategy (i.e., equating "no mention" with "negative") may degrade the model's performance. To address these issues, we propose a novel WSL framework, the anatomy-guided chest X-ray network (AGXNet), which learns features of both radiological observations and the relevant anatomical landmarks. The key component of our framework is an anatomy-guided attention module that regularizes the feature maps learned by the anatomy and observation encoders so that they agree on the location of an abnormality. We adopt a positive-unlabeled (PU) learning technique to iteratively improve the quality of the weak labels during training. Quantitative and qualitative results on the MIMIC-CXR dataset demonstrate the effectiveness of AGXNet in both disease and anatomic abnormality localization. Experiments on the NIH Chest X-ray dataset show that the learned image representations are transferable and outperform the baselines on both classification and localization tasks.
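To make the anatomy-guided attention idea concrete, below is a minimal sketch of one plausible way to penalize disagreement between the two encoders' spatial activation maps. The function name, tensor shapes, and the KL-divergence formulation are illustrative assumptions for exposition, not the loss actually used in AGXNet.

    import torch
    import torch.nn.functional as F

    def attention_consistency_loss(anatomy_cam, observation_cam):
        """Hypothetical consistency term: encourage the observation
        encoder's activation map to concentrate inside the region
        highlighted by the anatomy encoder.

        anatomy_cam, observation_cam: (B, H, W) tensors of activation
        maps for a paired anatomical landmark and observation.
        """
        b = anatomy_cam.size(0)
        # Normalize each map into a spatial probability distribution.
        p_anat = F.softmax(anatomy_cam.view(b, -1), dim=1)
        p_obs = F.softmax(observation_cam.view(b, -1), dim=1)
        # Penalize observation attention mass that falls outside the
        # anatomy region via KL(p_obs || p_anat).
        return F.kl_div(p_obs.log(), p_anat, reduction="batchmean")

In training, a term like this would be added to the usual classification losses of the two encoders, so that abnormality evidence and its anatomical location are learned jointly.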
Bio: Shantanu Ghosh is a PhD student in the Intelligent Systems Program. His research interests include computer vision, causal inference, and deep learning.
RSVP for more Zoom information: https://pitt.co1.qualtrics.com/jfe/form/SV_4THGjnpJlBm5wtU
Friday, April 22, from 1:00 p.m. to 1:30 p.m.
Virtual Event