Purpose: To compare methods for quantifying low contrast resolution in monthly CBCT QA using manual and semi-automatic software analysis.
Methods: The Phantom Laboratory’s Catphan 504 CBCT images, containing sub-slice and supra-slice low contrast disks, were evaluated over twelve months. The number of supra-slice disks visible, out of a maximum of twenty-seven, was recorded after visual inspection of the images. Using Radiological Imaging Technology’s (RIT) Radia Complete Imaging QA software, a central slice in the CTP515 module was selected and analyzed for each month. An average of seven slices, centered on that same central slice, was then evaluated using the same software. The number of disks detected with each method (manual, single slice, and multi-slice) was compared using Bland-Altman analysis and a paired two-sample t-test.
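For illustration, the multi-slice evaluation can be sketched as below, assuming the CBCT volume is available as a NumPy array. The function and file names are hypothetical and do not correspond to the RIT Radia implementation; the point is only that averaging N adjacent slices reduces uncorrelated noise by roughly √N, which is what drives the improved low contrast detectability.

```python
import numpy as np

def multi_slice_average(volume, center_index, n_slices=7):
    """Average n_slices axial slices centered on center_index.

    volume: 3D array (slice, row, col) of CBCT voxel values.
    Returns a single 2D slice with reduced noise, analogous to the
    seven-slice averaging described in the Methods.
    """
    half = n_slices // 2
    start = max(center_index - half, 0)
    stop = min(center_index + half + 1, volume.shape[0])
    return volume[start:stop].mean(axis=0)

# Hypothetical usage: average seven slices around the central CTP515 slice.
# cbct = np.load("catphan_ctp515.npy")                      # placeholder volume
# averaged = multi_slice_average(cbct, cbct.shape[0] // 2)  # 7-slice average
```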
Results: For all three comparisons (manual vs. RIT single slice, manual vs. RIT seven-slice averaging, and RIT single slice vs. RIT seven-slice averaging), the differences were statistically significant at a significance level of α=0.05. The magnitude of the difference was smallest for manual vs. RIT single slice (t=-3.70, p=0.0018), larger for RIT single slice vs. RIT seven-slice averaging (t=-3.94, p=0.0012), and largest for manual vs. RIT seven-slice averaging (t=-12.50, p=3.83×10⁻⁸). This maximum difference reflects higher low contrast disk detectability with the RIT seven-slice averaging method and lower disk detectability with manual visual observation. Bland-Altman analysis shows an increasing bias going from manual to seven-slice averaging, and generally wide limits of agreement.
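The reported statistics can be reproduced from paired monthly disk counts with a short script of the following kind. The arrays below are placeholders, not the study data; scipy.stats.ttest_rel implements the paired two-sample t-test, and the Bland-Altman bias and 95% limits of agreement are computed directly from the paired differences.

```python
import numpy as np
from scipy import stats

# Placeholder monthly counts of detected supra-slice disks (12 QA sessions).
manual       = np.array([6, 7, 6, 5, 7, 6, 6, 7, 5, 6, 7, 6])   # visual inspection
single_slice = np.array([7, 8, 7, 6, 8, 7, 7, 8, 6, 7, 8, 7])   # RIT, one central slice
seven_slice  = np.array([9, 10, 9, 8, 10, 9, 9, 10, 8, 9, 10, 9])  # RIT, 7-slice average

def paired_comparison(a, b, label):
    """Paired two-sample t-test and Bland-Altman statistics for two methods."""
    t, p = stats.ttest_rel(a, b)           # paired t-test on monthly counts
    diff = a - b
    bias = diff.mean()                     # Bland-Altman bias (mean difference)
    loa = 1.96 * diff.std(ddof=1)          # half-width of 95% limits of agreement
    print(f"{label}: t={t:.2f}, p={p:.4g}, "
          f"bias={bias:.2f}, LoA=({bias - loa:.2f}, {bias + loa:.2f})")

paired_comparison(manual, single_slice, "manual vs. RIT single slice")
paired_comparison(single_slice, seven_slice, "RIT single slice vs. 7-slice avg")
paired_comparison(manual, seven_slice, "manual vs. 7-slice avg")
```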
Conclusion: Semi-automatic multi-slice averaging greatly improves supra-slice low contrast disk detectability in the CTP515 module. This method can provide users with a reliable means of accurately assessing monthly CBCT low contrast module performance.