Coarse-to-Fine Contrast Maximization for Energy-Efficient Motion Estimation in Edge-Deployed Event-Based SLAM


Abstract

Event-based vision sensors offer microsecond temporal resolution and low power consumption, making them attractive for edge robotics and simultaneous localization and mapping (SLAM). Contrast maximization (CMAX) is a widely used direct geometric framework for rotational ego-motion estimation that aligns events by warping them and maximizing the spatial contrast of the resulting image of warped events (IWE). However, conventional CMAX is computationally inefficient because it repeatedly processes the full event set and a full-resolution IWE at every optimization iteration, including late-stage refinement, incurring both event-domain and image-domain costs. We propose coarse-to-fine contrast maximization (CCMAX), a computation-aware CMAX variant that aligns computational fidelity with the optimizer's coarse-to-fine convergence behavior. CCMAX progressively increases IWE resolution across stages and applies coarse-grid event subsampling to remove spatially redundant events in early stages, while retaining a final full-resolution refinement. On standard event-camera benchmarks with IMU ground truth, CCMAX achieves accuracy comparable to a full-resolution baseline while reducing floating-point operations (FLOPs) by up to 42%. Energy measurements on a custom RISC-V-based edge SoC further show up to 87% lower energy consumption for the iterative CMAX pipeline. These results demonstrate an energy-efficient motion-estimation front-end suitable for real-time edge SLAM on resource- and power-constrained platforms.
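The coarse-to-fine idea described in the abstract can be illustrated with a minimal NumPy sketch. Everything here is a simplifying assumption for illustration, not the paper's method: the warp is a constant image-plane velocity rather than the rotational ego-motion warp, event subsampling uses a simple stride as a stand-in for coarse-grid subsampling, and the stage schedule and local grid search (`stages`, `step`) are arbitrary choices.

```python
import numpy as np

def warp_events(xy, t, v):
    """Warp events (x, y) to a common reference time along a constant
    image-plane velocity v = (vx, vy). This stands in for the rotational
    warp used in CMAX, purely for illustration."""
    return xy - t[:, None] * v[None, :]

def iwe_contrast(xy_warped, res, sensor_size):
    """Accumulate warped events into an image of warped events (IWE) at
    the given resolution and return its variance (the contrast objective)."""
    scale = res / sensor_size
    ix = np.clip((xy_warped[:, 0] * scale).astype(int), 0, res - 1)
    iy = np.clip((xy_warped[:, 1] * scale).astype(int), 0, res - 1)
    iwe = np.zeros((res, res))
    np.add.at(iwe, (iy, ix), 1.0)  # unbuffered accumulation per pixel
    return iwe.var()

def ccmax(xy, t, sensor_size=128, stages=(16, 32, 64, 128)):
    """Coarse-to-fine contrast maximization: early stages use a low-res
    IWE and subsampled events; only the last stage runs at full
    resolution on all events, mirroring the CCMAX schedule."""
    best = np.zeros(2)
    step = 4.0  # assumed initial search radius (pixels per unit time)
    for res in stages:
        # Strided subsampling as a cheap stand-in for coarse-grid
        # event subsampling; the final stage keeps every event.
        keep = slice(None) if res == stages[-1] else slice(0, None, sensor_size // res)
        xs, ts = xy[keep], t[keep]
        # Simple local grid search around the current estimate.
        cands = [best + step * np.array([dx, dy])
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        best = max(cands, key=lambda v: iwe_contrast(
            warp_events(xs, ts, v), res, sensor_size))
        step /= 2
    return best
```

The coarse stages touch only a fraction of the events and a small IWE, which is where the FLOP savings reported in the abstract come from; the full-resolution final stage preserves refinement accuracy.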
