
Deep-learning prediction of breast cancer hormone receptor status from CEM: a preliminary study

Carriero, Alessandro; Albera, Marco; Boldorini, Renzo Luciano; Colarieti, Anna
2025-01-01

Abstract

Background: Hormone receptor (HR) status guides breast cancer therapy. Deep learning (DL) applied to contrast-enhanced mammography (CEM) might offer a noninvasive means of predicting HR status, but class imbalance challenges model development and assessment. This preliminary study investigates CEM-based DL for HR status prediction, focusing on class imbalance handling. Materials and methods: In this retrospective study, CEM tumor crops from 105 patients with invasive breast cancer were used. Patients were randomly split into training (n = 68), validation (n = 16), and independent test (n = 21) sets. A Residual Network-18 (ResNet-18), pretrained on ImageNet, was fine-tuned using a weighted cross-entropy loss and the Adam optimizer. Model selection used the validation area under the precision-recall curve (AUPRC); output probabilities were calibrated via temperature scaling. Performance was reported with accuracy, area under the receiver operating characteristic curve (AUROC), and imbalance-aware metrics (balanced accuracy and the Matthews correlation coefficient (MCC)), with 95% confidence intervals from a 1,000-iteration bootstrap. Results are reported at both the standard threshold (0.5) and an optimized threshold (maximizing the validation F1-score for the HR-negative class). Results: Validation AUPRC (the model selection metric) was 0.640 (95% CI 0.304-0.906). On the independent test set, at the optimized threshold of 0.755, the model achieved 91.9% accuracy (86.5-97.3%), an AUROC of 0.808 (0.648-0.935), a balanced accuracy of 0.700 (0.550-0.853), and an MCC of 0.605 (0.296-0.818). Conclusion: A ResNet-18 with patient-level data splitting and imbalance-aware fine-tuning can capture CEM features predictive of HR status, performing well despite substantial class imbalance. Generalizability is limited by dataset characteristics and acquisition specifics, warranting further validation in larger, more diverse cohorts to establish clinical applicability.
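The imbalance-aware fine-tuning and calibration described above can be sketched in PyTorch. This is a minimal illustration under stated assumptions, not the study's implementation: the class counts, the tiny stand-in network (the study fine-tuned an ImageNet-pretrained ResNet-18), the learning rate, and the temperature value are all assumed for demonstration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Inverse-frequency class weights: with roughly 85% HR-positive cases,
# the minority HR-negative class gets a proportionally larger weight.
# The counts below are illustrative, not the study's exact split.
counts = torch.tensor([16.0, 89.0])            # [HR-negative, HR-positive]
weights = counts.sum() / (2.0 * counts)        # minority class weighted up
criterion = nn.CrossEntropyLoss(weight=weights)

# The study fine-tuned an ImageNet-pretrained ResNet-18; a tiny linear
# stand-in keeps this sketch self-contained and fast.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(8, 3, 32, 32)                  # dummy CEM tumor crops
y = torch.randint(0, 2, (8,))                  # dummy HR labels (0 = negative)
for _ in range(3):                             # a few illustrative steps
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# Temperature scaling (post-hoc calibration): logits are divided by a
# scalar T fitted on the validation set; T > 1 softens overconfidence.
T = 1.5                                        # illustrative, not fitted here
probs = torch.softmax(model(x).detach() / T, dim=1)
```

Weighting the loss penalizes errors on the rare HR-negative class more heavily, while temperature scaling leaves the ranking (and hence AUROC/AUPRC) unchanged and only adjusts probability calibration.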
Relevance statement: This work explores whether routinely acquired CEM images contain sufficient information for DL-based prediction of HR status. A ResNet-18 was trained with a weighted loss and patient-level data splits; performance was quantified with imbalance-aware metrics to provide a realistic assessment in a highly skewed dataset, highlighting both the promise and the current constraints of CEM-based molecular imaging. Key points: A ResNet-18, optimized for class imbalance through weighted training and with calibrated probabilities, predicted HR positivity on CEM with 91.9% accuracy and an AUROC of 0.81 in an independent test cohort using an F1-tuned threshold. Balanced accuracy (0.70) and MCC (0.60) indicate that discrimination was maintained despite approximately 85% of cases being HR-positive. Patient-level splitting was employed to ensure robust evaluation. Limitations related to the dataset's scope and the specific imaging protocols may affect broader generalizability.
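The imbalance-aware metrics cited in the key points can be computed directly from confusion-matrix counts. The counts below are hypothetical, chosen only to resemble a 21-patient test set with ~85% HR-positive prevalence; they are not the study's actual results.

```python
import numpy as np

def balanced_accuracy(tp, tn, fp, fn):
    """Mean of sensitivity and specificity - robust to class imbalance."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 0.5 * (sensitivity + specificity)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den > 0 else 0.0

# Hypothetical confusion matrix for a 21-patient test set
# (positive = HR-positive); not the study's actual counts.
tp, tn, fp, fn = 17, 2, 1, 1
acc = (tp + tn) / (tp + tn + fp + fn)
print(round(acc, 3),
      round(balanced_accuracy(tp, tn, fp, fn), 3),
      round(mcc(tp, tn, fp, fn), 3))
```

Note how plain accuracy (0.905 here) flatters the model relative to balanced accuracy (0.806): with 18 of 21 cases positive, a trivial classifier predicting "HR-positive" for everyone would already reach about 86% accuracy but only 0.5 balanced accuracy and an MCC of 0.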
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11579/220902