Research Updates
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.

Developing a low-cost, open-source, locally manufactured workstation and computational pipeline for automated histopathology evaluation using deep learning.

Published: 2024 Aug 27
Authors: Divya Choudhury, James M Dolezal, Emma Dyer, Sara Kochanny, Siddhi Ramesh, Frederick M Howard, Jayson R Margalus, Amelia Schroeder, Jefree Schulte, Marina C Garassino, Jakob N Kather, Alexander T Pearson
Source: EBioMedicine

Abstract:

Deployment of and access to state-of-the-art precision medicine technologies remain a fundamental challenge in providing equitable global cancer care in low-resource settings. The expansion of digital pathology in recent years, and its potential interface with diagnostic artificial intelligence algorithms, provides an opportunity to democratize access to personalized medicine. Current digital pathology workstations, however, cost thousands to hundreds of thousands of dollars. As cancer incidence rises in many low- and middle-income countries, the validation and implementation of low-cost automated diagnostic tools will be crucial to helping healthcare providers manage the growing burden of cancer.

Here we describe a low-cost ($230) workstation for digital slide capture and computational analysis composed of open-source components. We analyze the predictive performance of deep learning models when they are used to evaluate pathology images captured using this open-source workstation versus images captured using common, significantly more expensive hardware. Validation studies assessed model performance on three distinct datasets and predictive models: head and neck squamous cell carcinoma (HPV-positive versus HPV-negative), lung cancer (adenocarcinoma versus squamous cell carcinoma), and breast cancer (invasive ductal carcinoma versus invasive lobular carcinoma).

Compared with traditional pathology image capture methods, low-cost digital slide capture and analysis with the open-source workstation, including the low-cost microscope device, was associated with model performance of comparable accuracy for breast, lung, and HNSCC classification. At the patient level of analysis, AUROC was 0.84 for HNSCC HPV status prediction, 1.0 for lung cancer subtype prediction, and 0.80 for breast cancer classification. Our ability to maintain model performance despite decreased image quality and low-power computational hardware demonstrates that it is feasible to massively reduce the costs associated with deploying deep learning models for digital pathology applications. Improving access to cutting-edge diagnostic tools may provide an avenue for reducing disparities in cancer care between high- and low-income regions.

Funding for this project, including personnel support, was provided via grants from NIH/NCI R25-CA240134, NIH/NCI U01-CA243075, NIH/NIDCR R56-DE030958, NIH/NCI R01-CA276652, NIH/NCI K08-CA283261, NIH/NCI-SOAR25CA240134, the SU2C (Stand Up to Cancer) Fanconi Anemia Research Fund - Farrah Fawcett Foundation Head and Neck Cancer Research Team Grant, and the European Union Horizon Program (I3LUNG). Copyright © 2024. Published by Elsevier B.V.
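The patient-level AUROC figures quoted above are computed by aggregating per-tile model predictions into a single score per patient before scoring. The sketch below is illustrative only, not the authors' pipeline: it assumes a simple mean-probability aggregation, and all function names and data are hypothetical.

```python
# Illustrative sketch: aggregate tile-level probabilities to patient level,
# then compute AUROC. Aggregation by mean is an assumption, not the paper's
# documented method; data below are toy values.
from collections import defaultdict

def auroc(labels, scores):
    """Rank-based AUROC: fraction of (positive, negative) pairs
    where the positive case receives the higher score (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def patient_level_auroc(tile_probs, tile_patient_ids, patient_labels):
    """Average each patient's tile probabilities, then score patients."""
    sums = defaultdict(lambda: [0.0, 0])
    for pid, prob in zip(tile_patient_ids, tile_probs):
        sums[pid][0] += prob
        sums[pid][1] += 1
    patients = sorted(sums)
    scores = [sums[p][0] / sums[p][1] for p in patients]
    labels = [patient_labels[p] for p in patients]
    return auroc(labels, scores)

# Toy example: two patients, three tiles each
probs = [0.9, 0.8, 0.7, 0.2, 0.3, 0.1]
pids = ["A", "A", "A", "B", "B", "B"]
print(patient_level_auroc(probs, pids, {"A": 1, "B": 0}))  # 1.0
```

Patient-level aggregation of this kind is what allows tile-level noise (e.g. from the lower-quality open-source capture hardware) to average out before the classification decision is scored.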