Research Highlights
Focusing on the latest research in tumors and tumor organoids, with first-hand updates on new developments.

BrainSegFounder: Towards 3D foundation models for neuroimage segmentation

Impact factor: 11.8
Journal ranking: Medicine Q1 (Top) / Computer Science: Artificial Intelligence Q1; Computer Science: Interdisciplinary Applications Q1; Engineering: Biomedical Q1; Nuclear Medicine Q1
Publication date: October 2024
Authors: Joseph Cox, Peng Liu, Skylar E Stolte, Yunchao Yang, Kang Liu, Kyle B See, Huiwen Ju, Ruogu Fang


Abstract

The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to analyze and interpret neuroimaging data. Medical foundation models have shown promise of superior performance with better sample efficiency. This work introduces a novel approach towards creating 3-dimensional (3D) medical foundation models for multimodal neuroimage segmentation through self-supervised training. Our approach involves a novel two-stage pretraining approach using vision transformers. The first stage encodes anatomical structures in generally healthy brains from a large-scale unlabeled neuroimage dataset of multimodal brain magnetic resonance imaging (MRI) images from 41,400 participants. This stage of pretraining focuses on identifying key features such as the shapes and sizes of different brain structures. The second pretraining stage identifies disease-specific attributes, such as the geometric shapes of tumors and lesions and their spatial placement within the brain. This dual-phase methodology significantly reduces the extensive data requirements usually necessary for AI model training in neuroimage segmentation, with the flexibility to adapt to various imaging modalities. We rigorously evaluate our model, BrainSegFounder, using the Brain Tumor Segmentation (BraTS) challenge and Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets. BrainSegFounder demonstrates a significant performance gain, surpassing the achievements of previous winning solutions that used fully supervised learning. Our findings underscore the impact of scaling up both the model complexity and the volume of unlabeled training data derived from generally healthy brains. Both of these factors enhance the accuracy and predictive capabilities of the model in neuroimage segmentation tasks. Our pretrained models and code are available at https://github.com/lab-smile/BrainSegFounder.
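The two-stage self-supervised pretraining described in the abstract follows the general masked-reconstruction pattern: 3D volumes are split into patches, a fraction of patches is hidden, and the model is trained to reconstruct the hidden content; stage 1 applies this objective to unlabeled healthy-brain MRI, and stage 2 repeats it on disease data before supervised fine-tuning. The sketch below is a minimal NumPy illustration of that objective only, not the authors' implementation; the function names, patch size, and mask ratio are assumptions for illustration.

```python
import numpy as np


def patchify(volume, patch=4):
    """Split a cubic 3D volume into non-overlapping patch^3 blocks.

    Returns an array of shape (num_patches, patch**3), one flattened
    patch per row -- the token layout a 3D vision transformer consumes.
    """
    d = volume.shape[0]
    assert volume.shape == (d, d, d) and d % patch == 0
    n = d // patch
    blocks = volume.reshape(n, patch, n, patch, n, patch)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(n ** 3, patch ** 3)
    return blocks


def mask_patches(patches, mask_ratio=0.6, rng=None):
    """Zero out a random subset of patches (the self-supervision signal)."""
    rng = np.random.default_rng(rng)
    n = patches.shape[0]
    masked_idx = rng.permutation(n)[: int(round(mask_ratio * n))]
    corrupted = patches.copy()
    corrupted[masked_idx] = 0.0
    return corrupted, masked_idx


def reconstruction_loss(patches, predicted, masked_idx):
    """MSE computed only on the masked patches, as in masked-image modeling."""
    diff = patches[masked_idx] - predicted[masked_idx]
    return float(np.mean(diff ** 2))


if __name__ == "__main__":
    # Toy 8x8x8 "volume" standing in for a multimodal brain MRI.
    vol = np.arange(512, dtype=float).reshape(8, 8, 8)
    tokens = patchify(vol, patch=4)
    corrupted, masked_idx = mask_patches(tokens, mask_ratio=0.6, rng=0)
    # A real model would predict the masked patches; here the "prediction"
    # is just the corrupted input, so the loss is strictly positive.
    print(reconstruction_loss(tokens, corrupted, masked_idx))
```

In the actual two-stage recipe, the same objective would first be minimized over the large unlabeled healthy-brain corpus (stage 1) and then over the disease-specific dataset (stage 2), after which the pretrained encoder is fine-tuned with segmentation labels.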