Abstract
The size and material appearance of objects affect how much force we use to pick them up. We can also infer physical properties from how objects move when they collide, so it seems plausible that such cues might likewise affect our actions. To test this, we developed a hybrid virtual-reality/real-object paradigm in which seven participants viewed VR movies of collisions between visually identical spheres that varied only in their apparent mass ratios and coefficients of restitution, while co-located real spheres mirrored the virtual objects' final positions. After watching each movie, participants picked up each object while we measured the forces they used to do so. Our findings show that lift forces indeed depend on an object's motion after impact. Interestingly, this motion also produced an illusion of perceived weight, much like the classic size-weight and material-weight illusions. These results show that motion cues alone can shape both how we plan a lift and how heavy an object feels.

NEW & NOTEWORTHY This study introduces a novel dynamic weight illusion, demonstrating that the sensorimotor system integrates conservation of momentum into both motor planning and perceptual weight judgments. Using a hybrid VR/real-object paradigm with millimeter-level motion tracking, we show that dynamic relational cues, not just static object features, bias anticipatory force and explicit perception. These findings reveal a direct perceptual encoding of physical interactions, offering new insight into how intuitive physics is grounded in sensorimotor systems.
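As an illustrative aside (not part of the study's methods or analysis), the physical inference the abstract alludes to can be sketched from textbook collision mechanics: for two spheres colliding in one dimension, conservation of momentum gives the mass ratio from the velocity changes, and the ratio of separation to approach speed gives the coefficient of restitution. The function names and example values below are hypothetical.

```python
# Illustrative sketch (not from the paper): inferring the apparent mass
# ratio and coefficient of restitution of two colliding spheres from
# their 1-D velocities before (u1, u2) and after (v1, v2) impact.

def mass_ratio(u1, u2, v1, v2):
    """m1/m2 from conservation of momentum:
    m1*u1 + m2*u2 = m1*v1 + m2*v2  =>  m1/m2 = (v2 - u2) / (u1 - v1)."""
    return (v2 - u2) / (u1 - v1)

def restitution(u1, u2, v1, v2):
    """Coefficient of restitution: relative separation speed over
    relative approach speed, e = (v2 - v1) / (u1 - u2)."""
    return (v2 - v1) / (u1 - u2)

# Example: sphere 1 moving at 1 m/s strikes a resting sphere 2; an
# elastic collision with m1/m2 = 2 yields v1 = 1/3 and v2 = 4/3 m/s.
print(mass_ratio(1.0, 0.0, 1/3, 4/3))   # ~2.0
print(restitution(1.0, 0.0, 1/3, 4/3))  # ~1.0
```

An observer (or model) given only the post-impact trajectories can thus, in principle, recover which sphere is heavier, which is the kind of motion cue the experiment manipulates.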