A deep learning model to enhance the classification of primary bone tumors based on incomplete multimodal images in X-ray, CT, and MRI
Impact factor: 3.5
Quartile: Medicine Q2 / Oncology Q2, Nuclear Medicine Q2
Published: 2024 Oct 10
Authors:
Liwen Song, Chuanpu Li, Lilian Tan, Menghong Wang, Xiaqing Chen, Qiang Ye, Shisi Li, Rui Zhang, Qinghai Zeng, Zhuoyao Xie, Wei Yang, Yinghua Zhao
DOI:
10.1186/s40644-024-00784-7
Abstract
Accurately classifying primary bone tumors is crucial for guiding therapeutic decisions. The National Comprehensive Cancer Network guidelines recommend multimodal images to provide different perspectives for the comprehensive evaluation of primary bone tumors. However, in clinical practice, most patients' multimodal images are incomplete. This study aimed to build a deep learning model using patients' incomplete multimodal images from X-ray, CT, and MRI, alongside clinical characteristics, to classify primary bone tumors as benign, intermediate, or malignant.

In this retrospective study, a total of 1305 patients with histopathologically confirmed primary bone tumors (internal dataset, n = 1043; external dataset, n = 262) were included from two centers between January 2010 and December 2022. We proposed a Primary Bone Tumor Classification Transformer Network (PBTC-TransNet) fusion model to classify primary bone tumors. Areas under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were calculated to evaluate the model's classification performance.

The PBTC-TransNet fusion model achieved micro-average AUCs of 0.847 (95% CI: 0.832, 0.862) and 0.782 (95% CI: 0.749, 0.817) on the internal and external test sets, respectively. For the classification of benign, intermediate, and malignant primary bone tumors, the model achieved AUCs of 0.827/0.727, 0.740/0.662, and 0.815/0.745 on the internal/external test sets, respectively. Furthermore, across all patient subgroups stratified by the distribution of imaging modalities, the PBTC-TransNet fusion model achieved micro-average AUCs ranging from 0.700 to 0.909 on the internal test set and from 0.640 to 0.847 on the external test set. The model showed the highest micro-average AUC of 0.909, accuracy of 84.3%, micro-average sensitivity of 84.3%, and micro-average specificity of 92.1% in patients with only X-rays on the internal test set. On the external test set, the model achieved its highest micro-average AUC of 0.847 in patients with X-ray + CT.

We successfully developed and externally validated the transformer-based PBTC-TransNet fusion model for the effective classification of primary bone tumors. This model, rooted in incomplete multimodal images and clinical characteristics, mirrors real-life clinical scenarios, enhancing its clinical practicability.
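The abstract reports micro-average AUCs for the three-class (benign/intermediate/malignant) task. As a point of reference, a micro-averaged one-vs-rest AUC pools the per-class binary problems into a single binary problem before computing one AUC. The sketch below is a minimal pure-Python illustration of that pooling, using the rank-sum (Mann-Whitney) formulation; the class labels and probability values are illustrative, not data from the study.

```python
def binary_auc(labels, scores):
    """Binary AUC via the rank-sum (Mann-Whitney U) statistic, with tie handling."""
    # Sort indices by score and assign average ranks to tied scores.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos_ranks = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos_ranks), len(labels) - len(pos_ranks)
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

def micro_average_auc(y_true, y_prob, n_classes=3):
    """Micro-average one-vs-rest AUC: flatten every (sample, class) pair
    into one pooled binary problem, then compute a single AUC over it."""
    flat_labels, flat_scores = [], []
    for y, probs in zip(y_true, y_prob):
        for c in range(n_classes):
            flat_labels.append(1 if y == c else 0)  # 1 where the true class matches
            flat_scores.append(probs[c])            # predicted probability of class c
    return binary_auc(flat_labels, flat_scores)

# Illustrative usage: 4 samples, classes 0=benign, 1=intermediate, 2=malignant.
y_true = [0, 2, 1, 0]
y_prob = [[0.7, 0.2, 0.1],
          [0.1, 0.3, 0.6],
          [0.2, 0.5, 0.3],
          [0.6, 0.3, 0.1]]
print(micro_average_auc(y_true, y_prob))  # every true class outranks the rest -> 1.0
```

Micro-averaging weights every (sample, class) decision equally, so prevalent classes dominate the pooled score; this differs from macro-averaging, which averages the per-class AUCs instead.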