Abstract
Real-time registration of preoperative 3D liver models to intraoperative 2D laparoscopic images is essential for augmented reality (AR) navigation in minimally invasive liver surgery. However, 3D-2D registration typically depends on anatomical landmark extraction and on pose estimation driven by iterative projection-based landmark distance computation, which is time-consuming. Unlike iterative pose refinement strategies, our method casts liver pose estimation as a partial-to-complete point matching problem. First, it leverages monocular depth estimation to reconstruct a partial intraoperative point cloud from a single RGB image. A two-stage point matching framework then establishes dense 3D-3D correspondences, from which the 6-DoF rigid pose is recovered by solving a weighted SVD over the matched point pairs. Experiments yield a reprojection error of 126.37 ± 48.98 pixels on the P2ILF dataset and a target registration error of 25.20 mm on the LLR-LUS dataset. These results indicate that our method aligns preoperative models to intraoperative scenes with promising accuracy and efficiency, suggesting its potential for practical rigid alignment in near real-time laparoscopic liver AR navigation.
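The final step described above, recovering a 6-DoF rigid pose from weighted 3D-3D correspondences via SVD, is a standard closed-form procedure (weighted Kabsch alignment). The sketch below is illustrative only and is not the paper's implementation; the function name and the use of NumPy are assumptions.

```python
import numpy as np

def weighted_rigid_pose(src, dst, w):
    """Closed-form weighted rigid alignment (Kabsch via SVD).

    Estimates rotation R and translation t minimizing
    sum_i w_i * || R @ src_i + t - dst_i ||^2, where src/dst are
    (N, 3) matched point pairs and w is an (N,) weight vector
    (e.g., correspondence confidences). Illustrative sketch only.
    """
    w = w / w.sum()                          # normalize weights
    mu_s = (w[:, None] * src).sum(axis=0)    # weighted centroid of source
    mu_d = (w[:, None] * dst).sum(axis=0)    # weighted centroid of target
    S = src - mu_s                           # center both point sets
    D = dst - mu_d
    H = (w[:, None] * S).T @ D               # 3x3 weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation (det = +1)
    t = mu_d - R @ mu_s
    return R, t
```

Because the solution is closed-form, pose recovery costs a single 3x3 SVD regardless of the number of correspondences, which is what makes this step compatible with near real-time use.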