Abstract
Simultaneous electrocardiography (ECG) and phonocardiography (PCG) offer a multimodal view of cardiac function by capturing electrical and mechanical activity, respectively. However, their shared and unique information, and their potential for mutual reconstruction, remain poorly understood across different physiological states and individuals. This study analyzes the EPHNOGRAM dataset of simultaneous ECG-PCG recordings during rest and exercise, using linear and nonlinear models, including a non-causal neural network, to reconstruct one modality from the other. Nonlinear models, especially the non-causal neural network, outperform the others, with ECG reconstruction from PCG proving more feasible than the reverse. In the within-subject scenario, the non-causal neural network achieved a signal-to-noise ratio (SNR) of 6.5 ± 5.2 dB and a cross-correlation of 0.78 ± 0.19 for PCG-based ECG reconstruction. These findings provide quantitative insight into the electromechanical relationship between cardiac signals and support the development of multimodal cardiac monitoring tools.
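The evaluation metrics named above (SNR in dB and peak normalized cross-correlation between a reference signal and its reconstruction) can be sketched as follows. This is an illustrative implementation on toy data, not the paper's evaluation code; the function names and the synthetic "ECG" are assumptions for demonstration.

```python
import numpy as np

def reconstruction_snr_db(reference, estimate):
    """SNR in dB: signal power of the reference over power of the residual."""
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))

def max_cross_correlation(reference, estimate):
    """Peak normalized cross-correlation over all lags (Pearson-style)."""
    a = (reference - reference.mean()) / (reference.std() * len(reference))
    b = (estimate - estimate.mean()) / estimate.std()
    return np.max(np.correlate(a, b, mode="full"))

# Toy stand-ins for a true ECG segment and a noisy reconstruction of it.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
ecg_true = np.sin(2 * np.pi * 5 * t)
ecg_est = ecg_true + 0.3 * rng.standard_normal(t.size)

print(f"SNR: {reconstruction_snr_db(ecg_true, ecg_est):.1f} dB")
print(f"max cross-correlation: {max_cross_correlation(ecg_true, ecg_est):.2f}")
```

A cross-correlation near 1 with a modest SNR, as reported in the within-subject results, indicates the reconstruction tracks the waveform's shape well even when residual amplitude error remains.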