Research Updates
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.

Joint learning method with teacher-student knowledge distillation for on-device breast cancer image classification.

Published: 2022 Dec 24
Authors: Majid Sepahvand, Fardin Abdali-Mohammadi
Source: COMPUTERS IN BIOLOGY AND MEDICINE

Abstract:

Deep learning models such as AlexNet, VGG, and ResNet achieve good performance in classifying breast cancer histopathological images from the BreakHis dataset. However, their computational complexity and large number of parameters make them impractical for devices with limited computational resources, where they are rarely deployed. This paper develops a lightweight learning model based on knowledge distillation to classify the histopathological images of breast cancer in BreakHis. The method employs two teacher models based on VGG and ResNeXt to train two student models that share the teachers' architectures but have fewer deep layers. An adaptive joint learning approach transfers the final-layer output of a teacher model, together with the feature maps of its intermediate layers, to the corresponding student model as dark knowledge. According to the experimental results, the student model built on the ResNeXt architecture achieved a recognition rate of 97.09% across all histopathological images. In addition, this model has about 69.40 million fewer parameters than its teacher, uses about 0.93 GB less GPU memory, and attains a compression rate 268.17 times greater than the teacher's, while its recognition rate drops by only 1.75%. Comparisons indicate that the student model's results are quite acceptable relative to state-of-the-art methods for classifying breast cancer images in BreakHis. Copyright © 2022. Published by Elsevier Ltd.
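
The distillation scheme summarized above combines two signals from each teacher: its softened final-layer output and the feature maps of its intermediate layers. A minimal PyTorch-style sketch of such a combined loss follows; the temperature, the weights alpha and beta, the detaching of teacher tensors, and the assumption that student and teacher feature maps have already been projected to matching shapes are illustrative choices, not details taken from the paper (whose adaptive joint weighting is not reproduced here).

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      student_feats, teacher_feats,
                      temperature=4.0, alpha=0.5, beta=0.1):
    # (1) Supervised cross-entropy on the hard labels (benign/malignant classes).
    ce = F.cross_entropy(student_logits, labels)

    # (2) Dark knowledge from the teacher's final layer: KL divergence between
    #     temperature-softened class distributions, scaled by T^2 so its gradient
    #     magnitude stays comparable to the hard-label term.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                  F.softmax(teacher_logits.detach() / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2

    # (3) Hint loss matching intermediate feature maps; assumes each student map
    #     has already been projected to the shape of its paired teacher map.
    hint = sum(F.mse_loss(s, t.detach())
               for s, t in zip(student_feats, teacher_feats))

    return (1 - alpha) * ce + alpha * kd + beta * hint

In practice, alpha and beta would be tuned on a validation split or adjusted during training in the spirit of the paper's adaptive joint learning; the fixed values here are placeholders only.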