Session: Data Science Robustness, Performance, and Data Harmonization

Towards Fairness in Artificial Intelligence for Medical Imaging: Identification and Mitigation of Biases in the Roadmap From Data Commons to Model Design and Deployment

K Drukker1*, W Chen2, J Gichoya3, N Gruszauskas1, J Kalpathy-Cramer4, S Koyejo5, K Myers6, R Sa7, B Sahiner2, H Whitney1,8, Z Zhang9, M Giger1, (1) University of Chicago, Chicago, IL, (2) Food & Drug Administration, Silver Spring, MD, (3) Emory University, (4) MGH/Harvard, Boston, MA, (5) University of Illinois Urbana-Champaign, (6) Food and Drug Administration (retired), (7) NIH/University of California San Diego, (8) Wheaton College, Wheaton, IL, (9) Jefferson Health

Presentations

SU-H430-IePD-F6-5 (Sunday, 7/10/2022) 4:30 PM - 5:00 PM [Eastern Time (GMT-4)]

Exhibit Hall | Forum 6

Purpose: There has been increased interest in developing medical imaging-based machine learning models and methods for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. It is known, however, that such models often not only fail to generalize but also propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in systematic differences in how groups of patients are treated, i.e., unfairness. It is thus imperative to identify and mitigate sources of bias, and to understand and measure fairness.
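To make "measuring fairness" concrete, below is a minimal sketch of one common group-fairness metric, the equalized-odds gap: the largest subgroup difference in true-positive and false-positive rates at a fixed operating point. All names and data here (y_true, y_score, group) are hypothetical placeholders, not taken from this work.

```python
# Minimal sketch of the equalized-odds gap between patient subgroups.
# A model satisfies equalized odds when its TPR and FPR are equal across groups.
import numpy as np

def equalized_odds_gap(y_true, y_score, group, threshold=0.5):
    """Return (max TPR gap, max FPR gap) across subgroups at a fixed threshold."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_score) >= threshold
    group = np.asarray(group)
    tprs, fprs = [], []
    for g in np.unique(group):
        mask = group == g
        pos = y_true[mask] == 1
        tprs.append(y_pred[mask][pos].mean())    # sensitivity within subgroup g
        fprs.append(y_pred[mask][~pos].mean())   # 1 - specificity within subgroup g
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Toy usage: a perfectly fair model would have both gaps near zero.
tpr_gap, fpr_gap = equalized_odds_gap(
    y_true=[1, 0, 1, 0, 1, 0],
    y_score=[0.9, 0.2, 0.8, 0.4, 0.3, 0.6],
    group=["A", "A", "A", "B", "B", "B"])
print(f"TPR gap: {tpr_gap:.2f}, FPR gap: {fpr_gap:.2f}")
```

Equalized odds is only one of several group-fairness criteria; which criterion is appropriate depends on the clinical task and the harms of each error type.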

Methods: Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. Sources of bias in AI/ML relevant to medical imaging were identified based on the literature and our own experience. Bias mitigation strategies were determined and translated into recommendations for best practices in AI/ML development in general and within the Medical Imaging and Data Resource Center (MIDRC; midrc.org).

Results: Five main steps along the roadmap of medical imaging AI development were identified: 1) data collection, 2) data preparation and annotation, 3) model development, 4) model evaluation, and 5) model deployment. Within these steps, we found over 30 sources of bias, some of which can arise in multiple steps. Our findings and recommendations are being translated into an online AI/ML bias educational tool.
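As one concrete example of mitigation in step 4 (model evaluation), performance should be reported per subgroup rather than only as a pooled value, so that disparities are not averaged away. Below is a minimal sketch under assumed column names ("label", "score", "site"); the data are illustrative, not from MIDRC.

```python
# Sketch of subgroup-stratified evaluation: a pooled AUC can mask a
# weak subgroup, while stratified AUCs expose the disparity.
import pandas as pd
from sklearn.metrics import roc_auc_score

results = pd.DataFrame({
    "label": [1, 0, 1, 0, 1, 0, 1, 0],          # hypothetical ground truth
    "score": [0.9, 0.1, 0.7, 0.4, 0.6, 0.5, 0.4, 0.2],  # hypothetical model output
    "site":  ["A", "A", "A", "A", "B", "B", "B", "B"],  # hypothetical subgroup
})

print("pooled AUC:", roc_auc_score(results["label"], results["score"]))
for site, df in results.groupby("site"):
    print(f"site {site} AUC:", roc_auc_score(df["label"], df["score"]))
```

The same stratified reporting applies to any subgroup variable of interest (e.g., scanner, sex, race, age group), subject to adequate subgroup sample sizes.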

Conclusion: Medical imaging AI/ML models are intended to help improve traditional human decision-making. However, biases introduced in the steps along the roadmap towards clinical deployment may impede their intended function, potentially exacerbating inequities. Recognizing and addressing bias is essential for algorithmic fairness and trustworthiness. We identified potential biases and mitigation strategies relevant for medical imaging AI/ML, and are making these findings available to researchers, clinicians, and the public at large as an online tool.

Funding Support, Disclosures, and Conflict of Interest: Research reported is part of MIDRC and was made possible by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) of the National Institutes of Health under contracts 75N92020C00008 and 75N92020C00021. RS is supported by NIH through the Data and Technology Advancement (DATA) National Service Scholar program.

Keywords

CAD
