Session: Deep Learning

Don’t miss the complete programme of VCBM 2018!

Thursday 20th September, 17:00-17:45. Room Andalucia I

Uncertainty-Guided Semi-Automated Editing of CNN-based Retinal Layer Segmentations in Optical Coherence Tomography
Full paper

Shekoufeh Gorgi Zadeh, Maximilian Wintergerst, Thomas Schultz

Convolutional neural networks (CNNs) have enabled dramatic improvements in the accuracy of automated medical image segmentation. Despite this, in many cases the results are still not reliable enough to be trusted "blindly". Consequently, a human rater is responsible for checking the correctness of the final result and needs to be able to correct any segmentation errors that he or she notices. For a particular use case, segmentation of the retinal pigment epithelium (RPE) and Bruch's membrane (BM) from optical coherence tomography (OCT), we develop a system that makes this process more efficient by guiding the rater to the segmentations that are most likely to require attention from a human expert, and by providing semi-automated tools for segmentation correction that exploit intermediate representations from the CNN. We demonstrate that our automated ranking of segmentation uncertainty correlates well with a manual assessment of segmentation quality, and with the distance to a ground truth segmentation. We also show that, when used together, uncertainty guidance and our semi-automated editing tools decrease the time required for segmentation correction by more than a factor of three.
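
(For readers curious what such an uncertainty ranking could look like in practice, the following is a minimal, hypothetical Python/NumPy sketch. It scores each scan by the mean entropy of the per-pixel softmax output of a segmentation CNN and ranks scans from most to least uncertain; the uncertainty measure actually used in the paper may differ, and all function and variable names are illustrative only.)

# A minimal, hypothetical sketch (not the authors' implementation): rank
# OCT scans by an entropy-based uncertainty score computed from the
# per-pixel softmax output of a segmentation CNN, so a rater can review
# the most uncertain segmentations first.
import numpy as np

def pixelwise_entropy(softmax_probs):
    """Shannon entropy per pixel; softmax_probs has shape (classes, H, W)."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(softmax_probs * np.log(softmax_probs + eps), axis=0)

def uncertainty_score(softmax_probs):
    """Scalar uncertainty for one scan: mean per-pixel entropy."""
    return float(pixelwise_entropy(softmax_probs).mean())

def rank_scans_by_uncertainty(scans):
    """scans maps scan IDs to softmax outputs; returns IDs, most uncertain first."""
    return sorted(scans, key=lambda k: uncertainty_score(scans[k]), reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake softmax outputs for three scans, shape (classes=3, H=8, W=8).
    fake = {f"scan_{i}": rng.random((3, 8, 8)) for i in range(3)}
    fake = {k: v / v.sum(axis=0, keepdims=True) for k, v in fake.items()}
    print(rank_scans_by_uncertainty(fake))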

Respond-CAM: Analyzing Deep Models for 3D Imaging Data by Visualizations
MICCAI invited talk

Guannan Zhao, Bo Zhou, Kaiwen Wang, Rui Jiang, Min Xu

The convolutional neural network (CNN) has become a powerful tool for various biomedical image analysis tasks, but there is a lack of visual explanation for the machinery of CNNs. In this paper, we present a novel algorithm, Respond-weighted Class Activation Mapping (Respond-CAM), for making CNN-based models interpretable by visualizing the input regions that are important for their predictions, especially for biomedical 3D imaging data. Our method uses the gradients of any target concept (e.g. the score of the target class) that flow into a convolutional layer to weight that layer's feature maps; the weighted feature maps are combined to produce a heatmap that highlights the regions of the image that are important for predicting the target concept. We prove a preferable sum-to-score property of Respond-CAM and verify that it significantly improves on the current state-of-the-art approach for 3D images. Our tests on Cellular Electron Cryo-Tomography 3D images show that Respond-CAM achieves superior performance in visualizing CNNs with 3D biomedical image inputs, and it also yields reasonably good results when visualizing CNNs with natural image inputs. Respond-CAM is an efficient and reliable approach for visualizing the CNN machinery and is applicable to a wide variety of CNN model families and image analysis tasks.
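
(As an illustration of the general idea, the Python/NumPy sketch below combines gradient-weighted 3D feature maps into a heatmap. It assumes the activations of a 3D convolutional layer and the gradients of the target score with respect to them are already available; the activation-weighted channel weighting shown here is one reading of the abstract and an assumption on our part, so consult the paper for the exact Respond-CAM formulation.)

# Illustrative sketch only, assuming the activations A of a 3D convolutional
# layer and the gradients dY/dA of the target score are already available
# (e.g. extracted from a deep learning framework). The activation-weighted
# channel weighting below is an assumption, not necessarily the paper's
# exact Respond-CAM formula.
import numpy as np

def cam_heatmap_3d(activations, gradients, eps=1e-8):
    """Combine gradient-weighted 3D feature maps into one heatmap.

    activations, gradients: arrays of shape (channels, D, H, W).
    Returns a heatmap of shape (D, H, W).
    """
    # Per-channel weight: activation-weighted average of the gradients.
    num = np.sum(activations * gradients, axis=(1, 2, 3))
    den = np.sum(activations, axis=(1, 2, 3)) + eps
    weights = num / den  # shape (channels,)
    # Weighted sum of the feature maps gives the class-discriminative heatmap.
    return np.tensordot(weights, activations, axes=(0, 0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((16, 4, 8, 8))      # fake activations from a 3D conv layer
    G = rng.standard_normal(A.shape)   # fake gradients of the target score
    print(cam_heatmap_3d(A, G).shape)  # -> (4, 8, 8)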