MSKD: Structured knowledge distillation for efficient medical image segmentation.
Published: 2023 Aug 02
Authors:
Libo Zhao, Xiaolong Qian, Yinghui Guo, Jiaqi Song, Jinbao Hou, Jun Gong
Source:
COMPUTERS IN BIOLOGY AND MEDICINE
Abstract:
In recent years, deep learning has revolutionized the field of medical image segmentation by enabling the development of powerful deep neural networks. However, these models tend to be complex and computationally demanding, posing challenges for practical implementation in clinical settings. To address this issue, we propose an efficient structured knowledge distillation framework that leverages a powerful teacher network to assist in training a lightweight student network. Specifically, we propose the Feature Filtering Distillation method, which focuses on transferring region-level semantic information while minimizing redundant information transmission from the teacher to the student network. This approach effectively mitigates the problem of inaccurate segmentation caused by similar internal organ characteristics. Additionally, we propose the Region Graph Distillation method, which exploits the higher-order representational capabilities of graphs to enable the student network to better imitate structured semantic information from the teacher. To validate the effectiveness of our proposed methods, we conducted experiments on the Synapse multi-organ segmentation and KiTS kidney tumor segmentation datasets using various network models. The results demonstrate that our method significantly improves the segmentation performance of lightweight neural networks, with improvements of up to 18.56% in Dice coefficient. Importantly, our approach achieves these improvements without introducing additional model parameters. Overall, our proposed knowledge distillation methods offer a promising solution for efficient medical image segmentation, empowering medical experts to make more accurate diagnoses and improve patient treatment. Copyright © 2023 Elsevier Ltd. All rights reserved.
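The reported gains are measured with the Dice coefficient, the standard overlap metric for segmentation masks. A minimal sketch of how it is computed on binary masks (illustrative NumPy code, not the authors' implementation; the `eps` smoothing term is a common convention, assumed here):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |pred ∩ target| / (|pred| + |target|),
    with a small eps to avoid division by zero on empty masks.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: predicted region overlaps the target in 3 of 4 pixels.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))  # 2*3 / (4+3) ≈ 0.857
```

In multi-organ benchmarks such as Synapse, this score is typically computed per organ class and then averaged, which is the setting in which the abstract's 18.56% improvement is reported.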