Purpose: Artifacts are known to reduce the quality of CT images and can affect the statistical analysis and quantitative utility of those images. Motion artifact is a leading type of CT artifact, caused by either incidental patient movement or involuntary motion (respiratory and cardiac). Currently, such artifacts, if present, are not quantified or monitored, nor are their dependencies on CT acquisition settings known. As a first step to address this gap, the aim of this study was to develop a neural network to detect and quantify motion artifacts in CT images.
Methods: Training data were drawn from three sources: (1) publicly available data from RSNA and SPIE, (2) clinical cases from our institution identified by expert radiologists as containing motion, and (3) a realistic CT simulator using anthropomorphic computational phantoms (DukeSim, XCAT, Duke University). The pixels containing motion were segmented (Seg3D, University of Utah), and the resulting segmentation masks were used as the ground-truth labels. A convolutional neural network (U-Net) was trained to identify pixels containing motion. Model performance was assessed by correlating, for each slice of the pre-allocated testing data, the percentage of voxels labeled as motion in the ground-truth masks with that in the predicted masks.
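The slice-level evaluation described above can be sketched as follows. This is a minimal illustration, assuming binary motion masks stored as nested lists of 0/1 voxels; all function names and the toy masks are hypothetical, not from the study's actual pipeline.

```python
# Sketch of the evaluation metric: per-slice motion fraction, then Pearson
# correlation between ground-truth and predicted series. Illustrative only.
from math import sqrt

def motion_fraction_per_slice(mask):
    """mask: list of 2D binary slices (1 = motion voxel).
    Returns the fraction of motion-labeled voxels in each slice."""
    fractions = []
    for sl in mask:
        total = sum(len(row) for row in sl)
        motion = sum(sum(row) for row in sl)
        fractions.append(motion / total)
    return fractions

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: three 2x2 slices per volume (hypothetical masks)
gt   = [[[1, 0], [0, 0]], [[1, 1], [0, 0]], [[1, 1], [1, 0]]]
pred = [[[1, 0], [0, 0]], [[1, 0], [0, 0]], [[1, 1], [1, 1]]]
r = pearson(motion_fraction_per_slice(gt), motion_fraction_per_slice(pred))
```

In this toy case the ground-truth fractions are 0.25, 0.50, 0.75 and the predicted fractions are 0.25, 0.25, 1.00, giving r ≈ 0.866; the study reports r = 0.4962 on its real testing data.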
Results: Within the testing data, the predictions were moderately positively correlated with the ground truth, with a correlation coefficient of 0.4962. It is anticipated that the correlation will improve as more training data are prepared. Visual examination of the predicted masks confirmed that the model is learning to identify the image signatures of motion artifacts.
Conclusion: This network has the potential to be a useful clinical tool, enabling quality-tracking systems to detect and quantify the presence of motion artifacts and to relate this metric to CT acquisition parameters.