研究动态
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.

使用分布式合成学习挖掘多中心异构医疗数据。

Mining multi-center heterogeneous medical data with distributed synthetic learning.

发表日期:2023 Sep 07
作者: Qi Chang, Zhennan Yan, Mu Zhou, Hui Qu, Xiaoxiao He, Han Zhang, Lohendran Baskaran, Subhi Al'Aref, Hongsheng Li, Shaoting Zhang, Dimitris N Metaxas
来源: HEART & LUNG

摘要:

在医疗分析中,由于隐私保护和医疗系统中的数据异构性,跨多中心使用数据面临重重障碍。在本研究中,我们提出了分布式合成学习(Distributed Synthetic Learning, DSL)架构,在多个医疗中心之间进行学习,同时确保敏感个人信息得到保护。DSL通过基于生成对抗网络(GAN)的合成学习,利用完全合成的医学影像构建均质数据集。所提出的DSL架构具有以下关键功能:多模态学习、缺失模态补全学习和持续学习。我们使用心脏计算机断层扫描血管造影(CTA)、脑肿瘤磁共振成像(MRI)以及组织病理学细胞核数据集,系统评估了DSL在不同医疗应用中的性能。大量实验表明,以理想的合成质量指标Dist-FID衡量,DSL作为高质量合成医学影像的提供者具有优越性能。我们还展示了DSL能够适应异构数据,其性能显著优于在真实的模态不对齐数据上训练的分割模型(提升55%)以及时序数据集分割模型(提升8%)。© 2023. Springer Nature Limited.
Overcoming barriers to the use of multi-center data for medical analytics is challenging due to privacy protection and data heterogeneity in the healthcare system. In this study, we propose the Distributed Synthetic Learning (DSL) architecture to learn across multiple medical centers and ensure the protection of sensitive personal information. DSL enables the building of a homogeneous dataset with entirely synthetic medical images via a form of GAN-based synthetic learning. The proposed DSL architecture has the following key functionalities: multi-modality learning, missing-modality completion learning, and continual learning. We systematically evaluate the performance of DSL on different medical applications using cardiac computed tomography angiography (CTA), brain tumor MRI, and histopathology nuclei datasets. Extensive experiments demonstrate the superior performance of DSL as a high-quality synthetic medical image provider, as measured by an ideal synthetic quality metric called Dist-FID. We show that DSL can be adapted to heterogeneous data and remarkably outperforms the segmentation model trained on real misaligned modalities by 55% and the temporal-dataset segmentation model by 8%. © 2023. Springer Nature Limited.
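The abstract's core idea — one shared generator learning across centers whose raw data never leaves the site — can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's actual architecture: the class names `MedicalCenter` and `CentralGenerator` are illustrative, real DSL trains GAN discriminators on images, and the "feedback" here is a stand-in scalar critique rather than discriminator gradients. What the sketch does show is the communication pattern: only synthetic samples and aggregated feedback cross center boundaries, and the generator's output afterwards forms a homogeneous synthetic dataset.

```python
import random
import statistics

random.seed(0)

class MedicalCenter:
    """Holds private data locally; only a scalar critique ever leaves the center."""
    def __init__(self, mean, std, n=500):
        # Private 1-D "images"; in real DSL these would be medical image volumes.
        self._private = [random.gauss(mean, std) for _ in range(n)]  # never shared

    def feedback(self, synthetic_batch):
        # Discriminator stand-in: how far is the synthetic batch's mean
        # from this center's local data mean? (A real center would return
        # discriminator gradients, not summary statistics.)
        return statistics.fmean(self._private) - statistics.fmean(synthetic_batch)

class CentralGenerator:
    """One generator shared across all centers, updated from aggregated feedback."""
    def __init__(self):
        self.mu = 0.0  # single learnable parameter in this toy model

    def sample(self, n=200):
        return [random.gauss(self.mu, 1.0) for _ in range(n)]

    def update(self, feedbacks, lr=0.5):
        # Aggregate critiques from all centers, then take one step.
        self.mu += lr * statistics.fmean(feedbacks)

centers = [MedicalCenter(4.0, 1.0), MedicalCenter(6.0, 1.5), MedicalCenter(5.0, 0.5)]
gen = CentralGenerator()

for _ in range(50):
    batch = gen.sample()                               # synthetic data goes out
    gen.update([c.feedback(batch) for c in centers])   # only critiques come back

# The generator drifts toward the pooled mean (~5.0) without ever seeing any
# center's raw data; its samples form a homogeneous synthetic dataset.
synthetic_dataset = gen.sample(1000)
print(round(gen.mu, 1))
```

The key design point the toy preserves is asymmetry of information flow: each `MedicalCenter` only ever receives synthetic samples and only ever emits a critique, so private records stay on-site by construction.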
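The abstract names Dist-FID as the metric used to judge synthetic image quality but does not define it; assuming it builds on the standard Fréchet Inception Distance (FID), the underlying quantity is the Fréchet distance between Gaussian fits of two feature distributions, ||μ₁−μ₂||² + Tr(C₁ + C₂ − 2(C₁C₂)^½). The sketch below computes this distance in the 1-D case, where it collapses to (μ₁−μ₂)² + (σ₁−σ₂)²; real FID would be computed on Inception-network embeddings, and the function name `frechet_distance_1d` is illustrative.

```python
import random
import statistics

def frechet_distance_1d(a, b):
    """Fréchet distance between 1-D Gaussian fits of two samples.

    General FID: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1 @ C2)^(1/2));
    for scalar covariances this reduces to (mu1-mu2)^2 + (s1-s2)^2.
    """
    m1, s1 = statistics.fmean(a), statistics.pstdev(a)
    m2, s2 = statistics.fmean(b), statistics.pstdev(b)
    return (m1 - m2) ** 2 + (s1 - s2) ** 2

random.seed(1)
real = [random.gauss(5.0, 1.0) for _ in range(2000)]
close = [random.gauss(5.1, 1.0) for _ in range(2000)]   # similar distribution
far = [random.gauss(9.0, 3.0) for _ in range(2000)]     # dissimilar distribution

# A better synthetic sample yields a smaller distance to the real data.
assert frechet_distance_1d(real, close) < frechet_distance_1d(real, far)
```

Lower values mean the synthetic distribution more closely matches the reference, which is why the abstract can rank DSL's outputs as "high-quality" by this family of metrics.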