Abstract
PURPOSE: Intraoperative liver deformation and the need to glance repeatedly between the operative field and a remote monitor undermine the precision and workflow of image-guided liver surgery. Existing mixed reality (MR) prototypes address only isolated aspects of this challenge and lack quantitative validation in deformable anatomy.
APPROACH: We introduce a fully self-contained MR navigation system for liver surgery that runs on an MR headset and bridges this clinical gap by (1) stabilizing holographic content with an external retro-reflective reference tool that defines a fixed world origin, (2) tracking instruments and surface points in real time with the headset's depth camera, and (3) compensating soft-tissue deformation through a weighted iterative closest point (ICP) and linearized iterative boundary reconstruction pipeline. A lightweight server-client architecture streams deformation-corrected 3D models to the headset and enables hands-free control via voice commands.
RESULTS: Validation on a multistate liver-phantom protocol demonstrated that the reference tool reduced mean hologram drift from 4.0 ± 1.2 mm to 1.1 ± 0.3 mm and improved tracking accuracy from 3.6 ± 1.3 mm to 2.3 ± 0.8 mm. Across five simulated deformation states, nonrigid registration lowered surface target registration error from 7.4 ± 4.8 mm to 3.0 ± 2.7 mm (an average 57% error reduction), yielding sub-4 mm guidance accuracy.
CONCLUSIONS: By unifying stable MR visualization, tool tracking, and biomechanical deformation correction in a single headset, the proposed platform eliminates monitor-related context switching and restores spatial fidelity lost to liver motion. The device-agnostic framework is extendable to open approaches and potentially to laparoscopic workflows and other soft-tissue interventions, marking a significant step toward MR-enabled surgical navigation.
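To illustrate the weighted-ICP stage of the registration pipeline, the following is a minimal sketch of rigid weighted ICP on NumPy point clouds. The residual-based weighting, iteration count, and helper names are illustrative assumptions, not the paper's implementation; the deformation-compensation (boundary reconstruction) stage is not shown.

```python
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Weighted Kabsch: R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = w / w.sum()
    src_c = (w[:, None] * src).sum(0)           # weighted centroids
    dst_c = (w[:, None] * dst).sum(0)
    # Weighted cross-covariance between centered clouds
    H = (src - src_c).T @ (w[:, None] * (dst - dst_c))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def weighted_icp(src, dst, n_iter=30):
    """Align src to dst; returns accumulated rotation R and translation t."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iter):
        # Nearest-neighbour correspondences (brute force, for clarity only)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)
        resid = np.sqrt(d2[np.arange(len(cur)), idx])
        # Illustrative robust weighting: downweight large residuals,
        # e.g. points lying in locally deformed regions
        w = 1.0 / (1.0 + resid / (np.median(resid) + 1e-9))
        R, t = weighted_rigid_transform(cur, dst[idx], w)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

In practice, a surgical navigation system would replace the brute-force correspondence search with a k-d tree and derive the weights from anatomical confidence rather than residuals alone.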