Purpose: Medical prediction models based on 2D images may lose spatial context information, while models based on 3D images may produce large numbers of redundant features and incur high computational complexity. We aim to develop a 2D/3D integrated network that fully utilizes both 2D and 3D information for better prediction.
Methods: Our model took 2D images and 3D images as input simultaneously. A 2D convolutional neural network extracted features from the 2D image and, slice by slice, from the 3D image. The model then learned to compare the 2D image with each 3D slice in pairs, computing a feature difference that propagated the 2D information to every slice of the 3D image. We hypothesized that the feature difference between the 2D image and the remaining slices compensates for the 2D image's lack of spatial depth variation. In the prediction stage, the classification result based on 2D image features was corrected using the feature difference.
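A minimal PyTorch sketch of this design follows. It is illustrative only: the class name Integrated2D3DNet, the toy encoder, the feature dimension, and the linear correction head are our own assumptions and are not taken from the abstract; only the overall flow (a shared 2D encoder applied to the 2D image and to every 3D slice, per-slice feature differences, and a difference-based correction of the 2D prediction) follows the description above.

```python
import torch
import torch.nn as nn

class Integrated2D3DNet(nn.Module):
    """Hypothetical sketch: a shared 2D CNN encodes the 2D image and each
    slice of the 3D volume; averaged per-slice feature differences supply a
    depth-aware correction to the 2D-only prediction."""

    def __init__(self, feat_dim=128, num_classes=2):
        super().__init__()
        # Shared 2D encoder, applied to the 2D image and to every 3D slice.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.cls_2d = nn.Linear(feat_dim, num_classes)      # 2D-only logits
        self.correction = nn.Linear(feat_dim, num_classes)  # difference branch

    def forward(self, img2d, vol3d):
        # img2d: (B, 1, H, W); vol3d: (B, D, 1, H, W) with D slices per volume
        B, D = vol3d.shape[:2]
        f2d = self.encoder(img2d)                                    # (B, F)
        fslices = self.encoder(vol3d.flatten(0, 1)).view(B, D, -1)   # (B, D, F)
        # Pairwise difference between the 2D feature and every slice feature,
        # averaged over depth to summarize spatial variation.
        diff = (f2d.unsqueeze(1) - fslices).mean(dim=1)              # (B, F)
        # Correct the 2D-based logits with the feature-difference branch.
        return self.cls_2d(f2d) + self.correction(diff)

# Example usage with random data:
# net = Integrated2D3DNet()
# logits = net(torch.randn(4, 1, 64, 64), torch.randn(4, 16, 1, 64, 64))
```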
Results: On the test set of a breast tumor dataset, our model achieved a sensitivity of 0.647 and a specificity of 0.741, a more balanced result than that of either the pure 2D or the pure 3D model. It also reached a classification accuracy of 0.705 and an AUC of 0.739, while its parameter count was only 20.53% of that of the 3D model.
Conclusion: In this study, we proposed a 2D/3D integrated network for benign-malignant breast tumor classification. The model simultaneously extracts 2D features and 3D spatial context, the latter described by feature differences between slices. Compared with 2D- or 3D-based models, it achieves superior classification performance: more balanced sensitivity and specificity, higher accuracy and AUC, and lower computational cost. The model therefore has the potential to be used in clinical practice for breast cancer prediction.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by the Science and Technology Program of Shaanxi Province, China (No. 2021KW-01).