In this proof-of-concept work, we developed a 3D-CNN architecture guided by the tumor mask for classifying patient outcomes in breast cancer from 3D dynamic contrast-enhanced MRI (DCE-MRI) images. The tumor masks on the DCE-MRI images were generated using pre- and post-contrast images and validated by experienced radiologists. We show that the proposed mask-guided classification achieves higher accuracy than classification from either the full image without the tumor mask (background included) or the masked tumor voxels alone. Two patient outcomes were used to compare model accuracies: (1) recurrence of cancer after 5 years of imaging and (2) HER2 status. Examining the activation maps, we conclude that an image-based prediction model using a 3D-CNN can be improved by even a conservatively generated mask, rather than relying entirely on an unguided, blind 3D-CNN. A blind CNN may classify accurately enough, yet its attention may be focused on a region remote from the tumor within the 3D image. Conversely, using only a conservatively segmented region may not classify as well as using the full image while directing the model's attention toward the known region of interest.
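To make the comparison concrete, the following is a minimal sketch, not the authors' implementation, of the three input strategies contrasted above. It assumes the mask "guides" the network simply by being concatenated as an additional input channel; the class name Simple3DCNN and all tensor shapes are illustrative placeholders.

```python
# Minimal sketch (assumed design, not the paper's architecture) contrasting
# full-image, masked-only, and mask-guided inputs to a small 3D-CNN.
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Small 3D-CNN classifier; in_channels depends on the input strategy."""
    def __init__(self, in_channels: int, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Toy inputs: a single-channel DCE-MRI volume and its binary tumor mask.
volume = torch.randn(1, 1, 64, 64, 64)                 # (batch, channel, D, H, W)
mask = (torch.rand(1, 1, 64, 64, 64) > 0.9).float()    # hypothetical binary mask

# (a) Full image without the mask ("blind" CNN): one input channel.
logits_full = Simple3DCNN(in_channels=1)(volume)

# (b) Masked voxels only: zero out everything outside the tumor mask.
logits_masked = Simple3DCNN(in_channels=1)(volume * mask)

# (c) Mask-guided: keep the full image but supply the mask as a second
#     channel so the network can attend to the known region of interest.
logits_guided = Simple3DCNN(in_channels=2)(torch.cat([volume, mask], dim=1))
```

Under this channel-concatenation assumption, strategy (c) preserves the background context that (b) discards while still pointing the model at the radiologist-validated region, which is the trade-off the abstract describes.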
Keywords: Breast cancer outcome classification; DCE-MRI; Deep learning; Mask-guided convolutional neural net.