Abstract
This paper proposes a multi-sensor data-fusion method for photorealistic 3D reconstruction and digital twin visualization of coal mine tunnels, addressing the low accuracy of non-photorealistic modeling and the difficulty of recognizing feature objects in traditional coal mine digitization. The method uses cubemap mapping to project real-time tunnel images onto the six faces of a cube and fuses them with navigation information, pose data, and synchronously acquired point cloud data to achieve spatial alignment. On this basis, inner/outer corner detection algorithms perform precise image segmentation, and a point cloud region growing algorithm optimized with information entropy is proposed to fully recognize and segment tunnel planes (e.g., roof, floor, left/right sidewalls) and high-curvature feature objects (e.g., ventilation ducts). Geometric dimensions extracted from the segmentation results are then used to construct 3D models, and real-scene images are mapped onto the model surfaces via UV texture mapping (using the U and V axes of texture coordinates), yielding digital twin models with authentic texture detail. Experiments in both simulated and real coal mine environments show that the method performs well, producing models that faithfully reproduce tunnel spatial layouts and fine details while supporting multi-view visualization (e.g., bottom view, left/right rotated views, front view). The approach provides efficient and precise technical support for digital twin construction, fine-grained structural modeling, and safety monitoring of coal mine tunnels, improving the accuracy and practicality of photorealistic 3D modeling in intelligent mining applications.
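The region growing step summarized above can be illustrated with a minimal sketch. This is not the paper's implementation: the abstract does not specify how information entropy enters the algorithm, so `normal_entropy` below (Shannon entropy of a normal-angle histogram) is an assumed stand-in for the paper's entropy-optimized criterion, and `region_grow` uses a simple normal-angle threshold to separate near-planar regions (roof, floor, sidewalls) from differently oriented surfaces.

```python
import numpy as np

def region_grow(points, normals, k=8, angle_thresh_deg=10.0):
    """Greedy point cloud region growing by normal similarity (sketch).

    points  : (N, 3) array of coordinates
    normals : (N, 3) array of unit normals
    Returns an (N,) integer label array, one label per grown region.
    """
    n = len(points)
    # Brute-force k-nearest neighbours; fine for the small N of a sketch.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]  # skip self at index 0

    cos_thresh = np.cos(np.deg2rad(angle_thresh_deg))
    labels = np.full(n, -1, dtype=int)
    region = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        # Flood-fill from the seed, accepting neighbours whose normals
        # deviate by less than the angle threshold.
        stack = [seed]
        labels[seed] = region
        while stack:
            i = stack.pop()
            for j in knn[i]:
                if labels[j] == -1 and abs(normals[i] @ normals[j]) >= cos_thresh:
                    labels[j] = region
                    stack.append(j)
        region += 1
    return labels

def normal_entropy(normals, bins=18):
    """Shannon entropy of the normal-direction distribution (assumed
    criterion): low entropy suggests one dominant plane, high entropy
    suggests curved or cluttered geometry such as a ventilation duct."""
    angles = np.degrees(np.arccos(np.clip(np.abs(normals[:, 2]), 0.0, 1.0)))
    hist, _ = np.histogram(angles, bins=bins, range=(0, 90))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

For example, a synthetic cloud made of a floor patch (normals along z) and a sidewall patch (normals along x) grows into two separate regions, and the floor patch alone has zero normal entropy while the combined cloud does not.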