Abstract
This article explores video analysis methods for monitoring eating behaviors, a key factor in chronic illnesses such as cancer, diabetes, and heart disease, which together account for approximately 70% of global deaths. Automated monitoring quantifies aspects such as meal duration, food types, and intake gestures (bites and drinks). Previous deep-learning methods segment videos into short clips (e.g., 16 frames at 8 Hz) for analysis, but this approach overlooks patterns in how gestures are distributed over the course of a meal, patterns that recur across individuals and sessions and can improve detection accuracy. Our study introduces a novel pipeline that analyzes the entire meal context (5-40 minutes). We propose a framework that allows a global detector to learn meal-length patterns with manageable computational demands. Additionally, we introduce a new augmentation technique that generates hundreds of meal-length feature samples per video, enabling effective training of a global detector despite the limited number of available videos. Experimental results on two datasets (Clemson Cafeteria and EatSense) demonstrate that our pipeline significantly enhances the performance of state-of-the-art window-based networks, particularly by reducing false positives in gesture detection. On the Clemson Cafeteria dataset of 486 meal videos, the largest dataset to date, our method achieves F1 scores of 0.93 for bite gestures and 0.88 for drink gestures, substantially outperforming existing methods.