Unsupervised Bayesian generation of synthetic CT from CBCT using patient-specific score-based prior

Med Phys. 2024 Dec 12. doi: 10.1002/mp.17572. Online ahead of print.

Abstract

Background: Cone-beam computed tomography (CBCT) scans, acquired at each treatment fraction (e.g., daily or weekly), are widely used for patient alignment in image-guided radiotherapy (IGRT), making CBCT a potential imaging modality for implementing adaptive radiotherapy (ART) protocols. Nonetheless, significant artifacts and inaccurate Hounsfield unit (HU) values hinder its use in quantitative tasks such as target and organ segmentation and dose calculation. Obtaining CT-quality images from CBCT scans is therefore essential for implementing online ART in clinical settings.

Purpose: This work aims to develop an unsupervised learning method that uses a patient-specific diffusion model to generate synthetic CT (sCT) from CBCT and thereby improve CBCT image quality.

Methods: The proposed method operates in an unsupervised framework that uses a patient-specific score-based model as the image prior, together with a customized total variation (TV) regularization that enforces coherence across transverse slices. The score-based model is trained unconditionally on the same patient's planning CT (pCT) images to characterize the manifold of CT-quality images and capture that patient's unique anatomical information. The method was evaluated on images from head and neck (H&N), pancreatic, and lung cancer patients. Performance of the proposed CBCT correction was quantified using mean absolute error (MAE), non-uniformity (NU), and structural similarity index measure (SSIM), and the algorithm was benchmarked against other unsupervised learning-based CBCT correction algorithms.
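The abstract does not include implementation details, so the following is a minimal, hypothetical sketch of the described idea: annealed Langevin sampling that combines a patient-specific score prior (trained on pCT), a CBCT data-fidelity term, and a TV penalty across transverse slices. The ScoreNet placeholder, noise schedule sigmas, and weights lambda_fid/lambda_tv are assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' released code): annealed Langevin sampling with a
# patient-specific score prior, a CBCT data-fidelity term, and a TV penalty across slices.
# ScoreNet, the noise schedule, and the weights lambda_fid / lambda_tv are assumptions.
import torch
import torch.nn as nn


class ScoreNet(nn.Module):
    """Placeholder for a score network s_theta(x, sigma) trained on the patient's pCT."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv3d(1, 1, kernel_size=3, padding=1)

    def forward(self, x, sigma):
        return self.net(x) / sigma  # stand-in for the learned score


def tv_3d(x):
    """Anisotropic total variation over a (N, C, D, H, W) volume, including the
    slice (depth) direction, to encourage coherence across transverse slices."""
    dz = (x[:, :, 1:, :, :] - x[:, :, :-1, :, :]).abs().sum()
    dy = (x[:, :, :, 1:, :] - x[:, :, :, :-1, :]).abs().sum()
    dx = (x[:, :, :, :, 1:] - x[:, :, :, :, :-1]).abs().sum()
    return dx + dy + dz


def generate_sct(cbct, score_net, sigmas, steps_per_sigma=10,
                 step_size=1e-5, lambda_fid=1.0, lambda_tv=1e-3):
    """Annealed Langevin dynamics: prior score + CBCT fidelity gradient + TV gradient."""
    x = cbct.clone()
    for sigma in sigmas:
        for _ in range(steps_per_sigma):
            x = x.detach().requires_grad_(True)
            # Gradients of the data-fidelity and TV terms w.r.t. the current estimate.
            loss = lambda_fid * ((x - cbct) ** 2).sum() + lambda_tv * tv_3d(x)
            grad = torch.autograd.grad(loss, x)[0]
            with torch.no_grad():
                score = score_net(x, sigma)          # prior score from the pCT-trained model
                noise = torch.randn_like(x)
                x = x + step_size * (score - grad) + (2 * step_size) ** 0.5 * noise
    return x.detach()


if __name__ == "__main__":
    cbct = torch.randn(1, 1, 8, 64, 64)              # toy CBCT volume (D, H, W)
    sct = generate_sct(cbct, ScoreNet(), sigmas=[1.0, 0.5, 0.1])
    print(sct.shape)
```

The key design point is that the prior is patient-specific: the score network sees only that patient's pCT during training, while the CBCT enters only at sampling time through the fidelity term.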

Results: The proposed method significantly reduced various CBCT artifacts in the H&N, pancreatic, and lung cancer patient studies. In the lung stereotactic body radiation therapy (SBRT) patient study, MAE, NU, and SSIM improved from 47 HU, 45 HU, and 0.58 in the original CBCT images to 13 HU, 14 HU, and 0.67 in the generated sCT images. Compared with other unsupervised learning-based algorithms, the proposed method achieved superior artifact reduction.
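As an illustration of the reported metrics, the short sketch below computes MAE, an ROI-based NU proxy, and SSIM on toy 2D arrays. The NU formula and ROIs are assumptions (the abstract does not specify them), and scikit-image is assumed for SSIM; this is not the authors' evaluation protocol.

```python
# Minimal sketch of the reported image-quality metrics on toy data; MAE in HU,
# SSIM via scikit-image, and a hypothetical ROI-based NU proxy (assumption).
import numpy as np
from skimage.metrics import structural_similarity as ssim


def mean_absolute_error(sct_hu, ref_ct_hu):
    """Mean absolute HU difference between the sCT and a reference CT."""
    return np.mean(np.abs(sct_hu - ref_ct_hu))


def non_uniformity(img_hu, roi_a, roi_b):
    """One common NU proxy (assumption): absolute difference of mean HU between
    two regions that should be homogeneous; the ROIs here are hypothetical."""
    return abs(img_hu[roi_a].mean() - img_hu[roi_b].mean())


if __name__ == "__main__":
    ref = np.random.uniform(-1000, 1000, (64, 64)).astype(np.float32)
    sct = ref + np.random.normal(0, 10, ref.shape).astype(np.float32)
    roi_a = (slice(0, 16), slice(0, 16))      # hypothetical homogeneous regions
    roi_b = (slice(48, 64), slice(48, 64))
    print("MAE (HU):", mean_absolute_error(sct, ref))
    print("NU (HU):", non_uniformity(sct, roi_a, roi_b))
    print("SSIM:", ssim(ref, sct, data_range=float(ref.max() - ref.min())))
```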

Conclusions: The proposed unsupervised method can generate sCT from CBCT with reduced artifacts and accurate HU values, enabling CBCT-guided segmentation and replanning for online ART.

Keywords: CBCT; diffusion model; synthetic CT; unsupervised learning.