Prediction Variability to Identify Reduced AI Performance in Cancer Diagnosis at MRI and CT

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Prediction Variability to Identify Reduced AI Performance in Cancer Diagnosis at MRI and CT. / Alves, Natália; Bosma, Joeran S.; Venkadesh, Kiran V.; Jacobs, Colin; Saghir, Zaigham; de Rooij, Maarten; Hermans, John; Huisman, Henkjan.

In: Radiology, Vol. 308, No. 3, e230275, 2023.

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Harvard

Alves, N, Bosma, JS, Venkadesh, KV, Jacobs, C, Saghir, Z, de Rooij, M, Hermans, J & Huisman, H 2023, 'Prediction Variability to Identify Reduced AI Performance in Cancer Diagnosis at MRI and CT', Radiology, vol. 308, no. 3, e230275. https://doi.org/10.1148/radiol.230275

APA

Alves, N., Bosma, J. S., Venkadesh, K. V., Jacobs, C., Saghir, Z., de Rooij, M., Hermans, J., & Huisman, H. (2023). Prediction Variability to Identify Reduced AI Performance in Cancer Diagnosis at MRI and CT. Radiology, 308(3), [e230275]. https://doi.org/10.1148/radiol.230275

Vancouver

Alves N, Bosma JS, Venkadesh KV, Jacobs C, Saghir Z, de Rooij M et al. Prediction Variability to Identify Reduced AI Performance in Cancer Diagnosis at MRI and CT. Radiology. 2023;308(3). e230275. https://doi.org/10.1148/radiol.230275

Author

Alves, Natália ; Bosma, Joeran S. ; Venkadesh, Kiran V. ; Jacobs, Colin ; Saghir, Zaigham ; de Rooij, Maarten ; Hermans, John ; Huisman, Henkjan. / Prediction Variability to Identify Reduced AI Performance in Cancer Diagnosis at MRI and CT. In: Radiology. 2023 ; Vol. 308, No. 3.

Bibtex

@article{16bea4f447a04eb18f213c7ed9c21466,
title = "Prediction Variability to Identify Reduced AI Performance in Cancer Diagnosis at MRI and CT",
abstract = "Background: A priori identification of patients at risk of artificial intelligence (AI) failure in diagnosing cancer would contribute to the safer clinical integration of diagnostic algorithms. Purpose: To evaluate AI prediction variability as an uncertainty quantification (UQ) metric for identifying cases at risk of AI failure in diagnosing cancer at MRI and CT across different cancer types, data sets, and algorithms. Materials and Methods: Multicenter data sets and publicly available AI algorithms from three previous studies that evaluated detection of pancreatic cancer on contrast-enhanced CT images, detection of prostate cancer on MRI scans, and prediction of pulmonary nodule malignancy on low-dose CT images were analyzed retrospectively. Each task{\textquoteright}s algorithm was extended to generate an uncertainty score based on ensemble prediction variability. AI accuracy percentage and partial area under the receiver operating characteristic curve (pAUC) were compared between certain and uncertain patient groups in a range of percentile thresholds (10%–90%) for the uncertainty score using permutation tests for statistical significance. The pulmonary nodule malignancy prediction algorithm was compared with 11 clinical readers for the certain group (CG) and uncertain group (UG). Results: In total, 18 022 images were used for training and 838 images were used for testing. AI diagnostic accuracy was higher for the cases in the CG across all tasks (P < .001). At an 80% threshold of certain predictions, accuracy in the CG was 21%–29% higher than in the UG and 4%–6% higher than in the overall test data sets. The lesion-level pAUC in the CG was 0.25–0.39 higher than in the UG and 0.05–0.08 higher than in the overall test data sets (P < .001). For pulmonary nodule malignancy prediction, accuracy of AI was on par with clinicians for cases in the CG (AI results vs clinician results, 80% [95% CI: 76, 85] vs 78% [95% CI: 70, 87]; P = .07) but worse for cases in the UG (AI results vs clinician results, 50% [95% CI: 37, 64] vs 68% [95% CI: 60, 76]; P < .001). Conclusion: An AI-prediction UQ metric consistently identified reduced performance of AI in cancer diagnosis.",
author = "Nat{\'a}lia Alves and Bosma, {Joeran S.} and Venkadesh, {Kiran V.} and Colin Jacobs and Zaigham Saghir and {de Rooij}, Maarten and John Hermans and Henkjan Huisman",
note = "Publisher Copyright: {\textcopyright} RSNA, 2023.",
year = "2023",
doi = "10.1148/radiol.230275",
language = "English",
volume = "308",
journal = "Radiology",
issn = "0033-8419",
publisher = "Radiological Society of North America, Inc.",
number = "3",

}

RIS

TY - JOUR

T1 - Prediction Variability to Identify Reduced AI Performance in Cancer Diagnosis at MRI and CT

AU - Alves, Natália

AU - Bosma, Joeran S.

AU - Venkadesh, Kiran V.

AU - Jacobs, Colin

AU - Saghir, Zaigham

AU - de Rooij, Maarten

AU - Hermans, John

AU - Huisman, Henkjan

N1 - Publisher Copyright: © RSNA, 2023.

PY - 2023

Y1 - 2023

N2 - Background: A priori identification of patients at risk of artificial intelligence (AI) failure in diagnosing cancer would contribute to the safer clinical integration of diagnostic algorithms. Purpose: To evaluate AI prediction variability as an uncertainty quantification (UQ) metric for identifying cases at risk of AI failure in diagnosing cancer at MRI and CT across different cancer types, data sets, and algorithms. Materials and Methods: Multicenter data sets and publicly available AI algorithms from three previous studies that evaluated detection of pancreatic cancer on contrast-enhanced CT images, detection of prostate cancer on MRI scans, and prediction of pulmonary nodule malignancy on low-dose CT images were analyzed retrospectively. Each task’s algorithm was extended to generate an uncertainty score based on ensemble prediction variability. AI accuracy percentage and partial area under the receiver operating characteristic curve (pAUC) were compared between certain and uncertain patient groups in a range of percentile thresholds (10%–90%) for the uncertainty score using permutation tests for statistical significance. The pulmonary nodule malignancy prediction algorithm was compared with 11 clinical readers for the certain group (CG) and uncertain group (UG). Results: In total, 18 022 images were used for training and 838 images were used for testing. AI diagnostic accuracy was higher for the cases in the CG across all tasks (P < .001). At an 80% threshold of certain predictions, accuracy in the CG was 21%–29% higher than in the UG and 4%–6% higher than in the overall test data sets. The lesion-level pAUC in the CG was 0.25–0.39 higher than in the UG and 0.05–0.08 higher than in the overall test data sets (P < .001). For pulmonary nodule malignancy prediction, accuracy of AI was on par with clinicians for cases in the CG (AI results vs clinician results, 80% [95% CI: 76, 85] vs 78% [95% CI: 70, 87]; P = .07) but worse for cases in the UG (AI results vs clinician results, 50% [95% CI: 37, 64] vs 68% [95% CI: 60, 76]; P < .001). Conclusion: An AI-prediction UQ metric consistently identified reduced performance of AI in cancer diagnosis.

AB - Background: A priori identification of patients at risk of artificial intelligence (AI) failure in diagnosing cancer would contribute to the safer clinical integration of diagnostic algorithms. Purpose: To evaluate AI prediction variability as an uncertainty quantification (UQ) metric for identifying cases at risk of AI failure in diagnosing cancer at MRI and CT across different cancer types, data sets, and algorithms. Materials and Methods: Multicenter data sets and publicly available AI algorithms from three previous studies that evaluated detection of pancreatic cancer on contrast-enhanced CT images, detection of prostate cancer on MRI scans, and prediction of pulmonary nodule malignancy on low-dose CT images were analyzed retrospectively. Each task’s algorithm was extended to generate an uncertainty score based on ensemble prediction variability. AI accuracy percentage and partial area under the receiver operating characteristic curve (pAUC) were compared between certain and uncertain patient groups in a range of percentile thresholds (10%–90%) for the uncertainty score using permutation tests for statistical significance. The pulmonary nodule malignancy prediction algorithm was compared with 11 clinical readers for the certain group (CG) and uncertain group (UG). Results: In total, 18 022 images were used for training and 838 images were used for testing. AI diagnostic accuracy was higher for the cases in the CG across all tasks (P < .001). At an 80% threshold of certain predictions, accuracy in the CG was 21%–29% higher than in the UG and 4%–6% higher than in the overall test data sets. The lesion-level pAUC in the CG was 0.25–0.39 higher than in the UG and 0.05–0.08 higher than in the overall test data sets (P < .001). For pulmonary nodule malignancy prediction, accuracy of AI was on par with clinicians for cases in the CG (AI results vs clinician results, 80% [95% CI: 76, 85] vs 78% [95% CI: 70, 87]; P = .07) but worse for cases in the UG (AI results vs clinician results, 50% [95% CI: 37, 64] vs 68% [95% CI: 60, 76]; P < .001). Conclusion: An AI-prediction UQ metric consistently identified reduced performance of AI in cancer diagnosis.

U2 - 10.1148/radiol.230275

DO - 10.1148/radiol.230275

M3 - Journal article

C2 - 37724961

AN - SCOPUS:85171900638

VL - 308

JO - Radiology

JF - Radiology

SN - 0033-8419

IS - 3

M1 - e230275

ER -
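
The abstract describes extending each algorithm with an ensemble-based uncertainty score (the variability of the ensemble members' predictions), splitting test cases into a certain group and an uncertain group at a percentile threshold, and comparing diagnostic accuracy between the groups with permutation tests. The Python sketch below illustrates that general idea on synthetic data; the function names, the 5-member ensemble, and the generated data are assumptions for illustration only and do not reproduce the published algorithms. The 80% certainty threshold and the 838 test cases mirror figures quoted in the abstract.

import numpy as np

def uncertainty_from_ensemble(member_probs):
    # Uncertainty score per case: standard deviation of the ensemble
    # members' predicted probabilities (shape: members x cases).
    return member_probs.std(axis=0)

def split_by_percentile(uncertainty, percentile):
    # Certain group = cases whose uncertainty falls at or below the given
    # percentile of the uncertainty distribution; uncertain group = the rest.
    threshold = np.percentile(uncertainty, percentile)
    certain = uncertainty <= threshold
    return certain, ~certain

def accuracy(probs, labels, cutoff=0.5):
    return float(((probs >= cutoff) == labels).mean())

def permutation_test(correct_a, correct_b, n_perm=10000, seed=0):
    # Two-sided permutation test on the difference in mean per-case
    # correctness between two groups.
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([correct_a, correct_b])
    observed = correct_a.mean() - correct_b.mean()
    n_a = len(correct_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n_members, n_cases = 5, 838   # hypothetical 5-member ensemble; 838 test images as in the abstract
    labels = rng.integers(0, 2, n_cases)

    # Synthetic ensemble outputs: members agree more closely on easy cases.
    base = np.clip(labels + rng.normal(0.0, 0.35, n_cases), 0.0, 1.0)
    member_probs = np.clip(base + rng.normal(0.0, 0.15, (n_members, n_cases)), 0.0, 1.0)
    mean_probs = member_probs.mean(axis=0)

    uncertainty = uncertainty_from_ensemble(member_probs)
    certain, uncertain = split_by_percentile(uncertainty, percentile=80)
    correct = ((mean_probs >= 0.5) == labels).astype(float)

    print("Accuracy, certain group:  ", round(accuracy(mean_probs[certain], labels[certain]), 3))
    print("Accuracy, uncertain group:", round(accuracy(mean_probs[uncertain], labels[uncertain]), 3))
    print("Permutation-test P value: ", permutation_test(correct[certain], correct[uncertain]))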
