Abstract
The registration process for fusing pre-operative information with intra-operative data collected during image-guided liver surgery is challenged by partial visibility of the liver surface. Learning-based generation of complete surfaces from partial point clouds has emerged as a promising direction for improving registration outcomes. However, the intra-operative liver surface can undergo significant deformation, leading to geometric discrepancies from its pre-operative shape and introducing error into the completed intra-operative surface. It is essential to understand the error introduced during surface generation and its impact on both rigid and non-rigid registration to ensure robust performance in clinical settings. In this study, we leveraged a VN-OccNet framework, trained in a patient-specific manner on simulated deformed data, to generate complete surfaces from partial observations extracted from five viewpoints across four in vitro liver phantoms. We first analysed the error of the complete surface mesh generated from the partial point cloud, and then integrated the generated complete surface into Go-ICP and GMM-FEM registration. Furthermore, we estimated the registration error separately for visible and invisible regions. Our results indicate that the error in the generated surface increases with distance from the partially visible liver surface, and that it can affect registration not only in the invisible region but, to some extent, also in the visible region within the camera's field of view.