HybridSense-LLM: A Structured Multimodal Framework for Large-Language-Model-Based Wellness Prediction from Wearable Sensors with Contextual Self-Reports



Abstract

Wearable sensors generate continuous physiological and behavioral data at population scale, yet wellness prediction remains limited by noisy measurements, irregular sampling, and subjective outcomes. We introduce HybridSense, a unified framework that integrates raw wearable signals and their statistical descriptors with large-language-model-based reasoning to produce accurate and interpretable estimates of stress, fatigue, readiness, and sleep quality. Using the PMData dataset, minute-level heart rate and activity logs are transformed into daily statistical features, whose relevance is ranked with a Random Forest model. These features, together with short waveform segments, are embedded into structured prompts and evaluated across seven prompting strategies using three large language model families: OpenAI 4o-mini, Gemini 2.0 Flash, and DeepSeek Chat. Bootstrap analyses demonstrate robust, task-dependent performance. Zero-shot prompting performs best for fatigue and stress, while few-shot prompting improves sleep-quality estimation. HybridSense further enhances readiness prediction by combining high-level descriptors with waveform context, and self-consistency and tree-of-thought prompting stabilize predictions for highly variable targets. All evaluated models exhibit low inference cost and practical latency. These results suggest that prompt-driven large language model reasoning, when paired with interpretable signal features, offers a scalable and transparent approach to wellness prediction from consumer wearable data.
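The abstract's feature pipeline (minute-level heart rate aggregated into daily statistical descriptors, whose relevance is then ranked by a Random Forest) can be sketched as follows. The column names, the synthetic heart-rate log, and the hypothetical readiness target are illustrative assumptions, not the actual PMData schema or the paper's implementation.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic minute-level heart-rate log: 30 days x 1440 minutes.
minutes = pd.date_range("2024-01-01", periods=30 * 1440, freq="min")
hr = pd.Series(rng.normal(70, 10, size=len(minutes)), index=minutes)

# Aggregate minute-level samples into daily statistical descriptors.
daily = hr.resample("D").agg(["mean", "std", "min", "max"])
daily.columns = ["hr_mean", "hr_std", "hr_min", "hr_max"]
daily["hr_iqr"] = hr.resample("D").apply(
    lambda x: x.quantile(0.75) - x.quantile(0.25))

# Hypothetical daily wellness target (e.g. self-reported readiness, 1-5).
target = rng.integers(1, 6, size=len(daily))

# Rank feature relevance with a Random Forest, as the abstract describes.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(daily, target)
ranking = sorted(zip(daily.columns, rf.feature_importances_),
                 key=lambda t: t[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

In the framework described above, the top-ranked descriptors would then be serialized into the structured prompt alongside short waveform segments; the ranking step shown here only selects which features reach the language model.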
