Research Updates
Articles below are published ahead of final publication in an issue. Please cite articles in the following format: authors, (year), title, journal, DOI.

Controlled and Real-Life Investigation of Optical Tracking Sensors in Smart Glasses for Monitoring Eating Behavior Using Deep Learning: Cross-Sectional Study.

Publication date: 2024 Sep 26
Authors: Simon Stankoski, Ivana Kiprijanovska, Martin Gjoreski, Filip Panchevski, Borjan Sazdov, Bojan Sofronievski, Andrew Cleal, Mohsen Fatoorechi, Charles Nduka, Hristijan Gjoreski
Source: JMIR mHealth and uHealth

Abstract:

The increasing prevalence of obesity necessitates innovative approaches to better understand this health crisis, particularly given its strong connection to chronic diseases such as diabetes, cancer, and cardiovascular conditions. Monitoring dietary behavior is crucial for designing effective interventions that help decrease obesity prevalence and promote healthy lifestyles. However, traditional dietary tracking methods are limited by participant burden and recall bias. Exploring microlevel eating activities, such as meal duration and chewing frequency, in addition to eating episodes, is crucial due to their substantial relation to obesity and disease risk.

The primary objective of the study was to develop an accurate and noninvasive system for automatically monitoring eating and chewing activities using sensor-equipped smart glasses. The system distinguishes chewing from other facial activities, such as speaking and teeth clenching. The secondary objective was to evaluate the system's performance on unseen test users using a combination of laboratory-controlled and real-life user studies. Unlike state-of-the-art studies that focus on detecting full eating episodes, our approach provides a more granular analysis by specifically detecting chewing segments within each eating episode.

The study uses OCO optical sensors embedded in smart glasses to monitor facial muscle activations related to eating and chewing activities. The sensors measure relative movements on the skin's surface in 2 dimensions (X and Y). Data from these sensors are analyzed using deep learning (DL) to distinguish chewing from other facial activities. To address the temporal dependence between chewing events in real life, we integrate a hidden Markov model as an additional component that analyzes the output from the DL model.

Statistical tests of mean sensor activations revealed statistically significant differences across all 6 comparison pairs (P<.001) involving 2 sensors (cheeks and temple) and 3 facial activities (eating, clenching, and speaking). These results demonstrate the sensitivity of the sensor data. Furthermore, the convolutional long short-term memory model, which is a combination of convolutional and long short-term memory neural networks, emerged as the best-performing DL model for chewing detection. In controlled laboratory settings, the model achieved an F1-score of 0.91, demonstrating robust performance. In real-life scenarios, the system demonstrated high precision (0.95) and recall (0.82) for detecting eating segments. The chewing rates and the number of chews evaluated in the real-life study showed consistency with expected real-life eating behaviors.

The study represents a substantial advancement in dietary monitoring and health technology. By providing a reliable and noninvasive method for tracking eating behavior, it has the potential to revolutionize how dietary data are collected and used. This could lead to more effective health interventions and a better understanding of the factors influencing eating habits and their health implications.

©Simon Stankoski, Ivana Kiprijanovska, Martin Gjoreski, Filip Panchevski, Borjan Sazdov, Bojan Sofronievski, Andrew Cleal, Mohsen Fatoorechi, Charles Nduka, Hristijan Gjoreski. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 26.09.2024.
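The abstract describes a convolutional long short-term memory network that classifies windows of 2-axis optical-sensor data as chewing versus other facial activity, with a hidden Markov model smoothing the per-window outputs to capture temporal dependence. The sketch below is a minimal illustrative reconstruction of that kind of pipeline, not the authors' implementation: the channel count (two sensors × two axes), window length, network sizes, and the 0.95 self-transition probability are assumptions made here for demonstration only.

```python
# Minimal sketch (assumed parameters, not the published system): a Conv+LSTM
# window classifier for "chewing vs. other facial activity", followed by
# 2-state Viterbi smoothing of the per-window chewing probabilities.
import numpy as np
import torch
import torch.nn as nn


class ConvLSTMChewingClassifier(nn.Module):
    """1D convolutional feature extractor followed by an LSTM and a 2-class head."""

    def __init__(self, in_channels: int = 4, hidden_size: int = 64):
        super().__init__()
        # in_channels = 4 assumes two sensors (cheek, temple) x two axes (X, Y).
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # logits: [other, chewing]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples_per_window)
        feats = self.conv(x)             # (batch, 64, T')
        feats = feats.transpose(1, 2)    # (batch, T', 64) for the LSTM
        out, _ = self.lstm(feats)
        return self.head(out[:, -1, :])  # classify from the last time step


def hmm_smooth(p_chew: np.ndarray, p_stay: float = 0.95) -> np.ndarray:
    """Viterbi decoding over two states (0 = other, 1 = chewing).

    Uses the network's per-window chewing probabilities as emission scores and
    a sticky transition matrix to enforce temporal consistency between windows.
    """
    trans = np.log(np.array([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]))
    emit = np.log(np.stack([1 - p_chew, p_chew], axis=1) + 1e-9)  # (T, 2)
    T = len(p_chew)
    score = np.zeros((T, 2))
    back = np.zeros((T, 2), dtype=int)
    score[0] = emit[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + trans  # rows: previous state, cols: current
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + emit[t]
    path = np.zeros(T, dtype=int)
    path[-1] = score[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path


if __name__ == "__main__":
    model = ConvLSTMChewingClassifier()
    windows = torch.randn(8, 4, 128)  # 8 dummy windows of 128 samples each
    probs = torch.softmax(model(windows), dim=1)[:, 1].detach().numpy()
    print(hmm_smooth(probs))          # smoothed 0/1 chewing-label sequence
```

The sticky transition matrix in the smoothing step suppresses isolated single-window label flips, which is one plausible way to realize the abstract's point that chewing events in real life are temporally dependent rather than independent windows.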