Abstract
Analysis and generation of conversational gestures, especially in multi-party settings, remain an open challenge across many fields due to the lack of publicly available datasets, models, and standardized evaluation metrics. To address this gap, we introduce Multi-TPC, a multimodal dataset of three-party conversations featuring synchronized speech, motion, and gaze. Multi-TPC captures rich conversational dynamics, enabling the study of interactions among multiple participants. Our statistical analysis reveals correlations between gestures and other modalities and factors, including audio, text, and speaker identity. Our dataset and analyses provide a foundation for advancing research in discourse analysis, human communication dynamics, and multimodal interaction.