Adversarial robustness improvement for X-ray bone segmentation using synthetic data created from computed tomography scans

Sci Rep. 2024 Oct 28;14(1):25813. doi: 10.1038/s41598-024-73363-2.

Abstract

Deep learning-based image analysis offers great potential in clinical practice. However, it faces two main challenges: scarcity of large-scale annotated clinical data for training and susceptibility to adversarial data at inference. As an example, an artificial intelligence (AI) system could check patient positioning by segmenting and evaluating the relative positions of anatomical structures in medical images. However, the data available to train such an AI system may be highly imbalanced, with mostly well-positioned images available. We therefore propose using synthetic X-ray images and annotation masks, forward projected from 3D photon-counting CT volumes, to create realistic non-optimally positioned X-ray images for training. An open-source model (TotalSegmentator) was used to annotate the clavicles in the 3D CT volumes. We evaluated model robustness with respect to the internal (simulated) patient rotation α for models trained on real data only and for models trained on combined real and synthetic data. Our results showed that models trained on combined real and synthetic data achieved Dice score improvements of 3% to 15% across the different α groups compared to the model trained on real data only. We thus demonstrated that synthetic data can be used to supplement training and enrich heavily underrepresented conditions, thereby increasing model robustness.
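The pipeline outlined in the abstract (forward projecting an annotated CT volume at a simulated rotation α and evaluating segmentation overlap with a Dice score) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes a simple parallel-beam projection (a sum along one axis rather than a physically accurate cone-beam or attenuation model), a (z, y, x) volume layout, and hypothetical helper names such as `forward_project` and `dice_score`.

```python
import numpy as np
from scipy.ndimage import rotate


def forward_project(volume: np.ndarray, mask: np.ndarray, alpha_deg: float):
    """Parallel-beam forward projection of a CT volume and its label mask.

    volume    : 3D array of CT intensities, assumed (z, y, x) layout
    mask      : 3D binary array with the structure annotation (e.g. clavicle)
    alpha_deg : simulated internal patient rotation about the z-axis
    """
    # Rotate volume and mask identically so image and annotation stay aligned.
    rot_vol = rotate(volume, alpha_deg, axes=(1, 2), reshape=False, order=1)
    rot_mask = rotate(mask.astype(np.float32), alpha_deg, axes=(1, 2),
                      reshape=False, order=0)

    # Sum along the (assumed) anterior-posterior axis to obtain a 2D
    # radiograph-like image; any voxel contributing to the mask yields a
    # foreground pixel in the projected annotation.
    synthetic_xray = rot_vol.sum(axis=1)
    projected_mask = (rot_mask.sum(axis=1) > 0).astype(np.uint8)
    return synthetic_xray, projected_mask


def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    # Toy example with random data standing in for a photon-counting CT volume.
    ct = np.random.rand(64, 64, 64).astype(np.float32)
    clavicle_mask = np.zeros_like(ct, dtype=np.uint8)
    clavicle_mask[30:40, 20:30, 10:50] = 1

    xray, label = forward_project(ct, clavicle_mask, alpha_deg=15.0)
    print(xray.shape, label.shape, dice_score(label, label))
```

In practice, sweeping `alpha_deg` over a range of rotations would generate the non-optimally positioned training images and masks that the abstract describes as supplementing the real data.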

Keywords: Adversarial Training; Computed Tomography; Robustness; Segmentation; Synthetic X-ray.

MeSH terms

  • Artificial Intelligence
  • Bone and Bones / diagnostic imaging
  • Deep Learning*
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Imaging, Three-Dimensional / methods
  • Tomography, X-Ray Computed* / methods