Abstract
Dual-LiDAR systems are widely deployed in autonomous driving, yet extrinsic calibration remains challenging in non-overlapping field-of-view (FoV) configurations, where correspondence-based methods are unreliable. We propose an engineering-oriented 2.5D calibration framework that first estimates the horizontal extrinsics (x, y, yaw) via motion-guided planar alignment and then refines them using Gaussian Process Implicit Surfaces (GPIS), which provide continuous, probabilistic surface constraints from spatially disjoint scans. This design requires no calibration targets and reduces dependence on strong scene assumptions, improving robustness under noise and weak geometric structure. Extensive high-fidelity simulation experiments demonstrate centimeter-level lateral accuracy and sub-degree yaw error, consistently outperforming representative motion-based and BEV-based baselines in both clean and noisy settings. To further assess real-world applicability, we conduct a preliminary nuScenes case study in which LiDAR scans are split into front and rear subsets to emulate a non-overlapping dual-LiDAR setup, achieving improved yaw accuracy and competitive lateral precision. Overall, the proposed method serves as a practical refinement stage for non-overlapping dual-LiDAR calibration, offering a favorable balance of accuracy, robustness, and engineering feasibility.
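To illustrate the GPIS idea referenced above, the following is a minimal toy sketch, not the paper's implementation: a 2D "scan" of a unit circle is modeled as the zero level set of a GP regressor (scikit-learn's `GaussianProcessRegressor` with an RBF kernel, both assumed here for illustration), with off-surface anchor points along the normals supplying signed-distance targets. The GP then yields a continuous implicit surface plus a predictive uncertainty, which is the kind of probabilistic surface constraint the refinement stage exploits.

```python
# Toy GPIS sketch (illustrative only; not the paper's method or data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy 2D "scan": points sampled on the unit circle.
theta = rng.uniform(0, 2 * np.pi, 60)
surface = np.c_[np.cos(theta), np.sin(theta)]
normals = surface / np.linalg.norm(surface, axis=1, keepdims=True)

d = 0.2  # normal offset for signed-distance anchor points
X = np.vstack([surface,
               surface + d * normals,    # outside: positive signed distance
               surface - d * normals])   # inside: negative signed distance
y = np.concatenate([np.zeros(len(surface)),
                    np.full(len(surface), d),
                    np.full(len(surface), -d)])

# Fixed-hyperparameter GP (optimizer disabled to keep the sketch deterministic).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                              alpha=1e-4, optimizer=None)
gp.fit(X, y)

# Query the implicit function: zero on the surface, signed off it;
# the predictive std quantifies confidence away from observed points.
q = np.array([[1.0, 0.0],   # on the surface
              [0.0, 0.0],   # inside
              [2.0, 0.0]])  # outside
mean, std = gp.predict(q, return_std=True)
print(mean.round(2), std.round(2))
```

In a calibration setting, the residual between one LiDAR's points and the implicit surface fitted to the other LiDAR's scan (weighted by the predictive variance) would serve as the alignment cost, even when the two scans do not overlap at identical points.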