A Multi-Step Grasping Framework for Zero-Shot Object Detection in Everyday Environments Based on Lightweight Foundational General Models

基于轻量级基础通用模型的日常环境零样本目标检测多步抓取框架

Abstract

Achieving object grasping in everyday environments by leveraging the powerful generalization of foundational general models, while keeping those models efficient enough to deploy inside robotic control systems, is a key challenge for service robots. To address the application environments and hardware resource constraints of household robots, a Three-step Pipeline Grasping Framework (TPGF) is proposed for zero-shot object grasping. The framework follows the principle of "object perception - object point cloud extraction - grasping pose determination" and requires no training or fine-tuning. We integrate advanced foundational models into the Object Perception Module (OPM) to maximize zero-shot generalization and develop a novel Point Cloud Extraction Method (PCEM) based on Depth Information Suppression (DIS) to enable targeted grasping in complex scenes. Furthermore, to reduce hardware overhead and accelerate deployment, a Saturated Truncation strategy based on relative information entropy is introduced for high-precision quantization, yielding the highly efficient model EntQ-EdgeSAM. Experimental results on public datasets demonstrate that the combined foundational models achieve superior detection generalization compared to task-specific baselines. The proposed Saturated Truncation strategy achieves 3-21% higher quantization accuracy than symmetric uniform quantization, compresses the EntQ-EdgeSAM model file by 3.5%, and speeds up its inference by 95%. Grasping experiments confirm that the TPGF achieves robust recognition accuracy and high grasping success rates in zero-shot object grasping tasks within replicated everyday environments, demonstrating its practical value and efficiency for real-world robotic deployment.
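The abstract only names the Depth Information Suppression idea behind the PCEM. As an illustration (not the paper's implementation), the sketch below shows one common way such a step can work: a segmentation mask from the perception module suppresses the depth values of all background pixels, and the surviving pixels are back-projected through a pinhole camera model into an object point cloud. The function name and intrinsic parameters (`extract_object_point_cloud`, `fx`, `fy`, `cx`, `cy`) are assumptions for illustration.

```python
import numpy as np

def extract_object_point_cloud(depth, mask, fx, fy, cx, cy):
    """Suppress depth outside the object mask, then back-project the
    remaining pixels into a 3-D point cloud in the camera frame.

    Hypothetical sketch: depth is an (H, W) array in metres, mask is an
    (H, W) boolean array from the perception module, and fx/fy/cx/cy are
    pinhole camera intrinsics.
    """
    # Depth information suppression: zero out depth everywhere but the object
    d = np.where(mask, depth, 0.0)
    # Pixel coordinates (row v, column u) of the surviving object points
    v, u = np.nonzero(d > 0)
    z = d[v, u]
    # Standard pinhole back-projection from pixels to camera coordinates
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3) object point cloud
```

The resulting point cloud would then be handed to the third pipeline stage, which determines a grasping pose from the object geometry alone, since all background depth has been suppressed.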
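The Saturated Truncation strategy is described only as being "based on relative information entropy." As a minimal sketch in the spirit of well-known entropy-based calibration for low-bit quantization (the paper's exact procedure may differ), the code below searches for the clipping threshold whose truncated, coarsely quantized activation distribution stays closest, in KL divergence, to the original distribution. All names and default parameters (`saturation_threshold`, `num_bins`, `levels`) are assumptions for illustration.

```python
import numpy as np

def _kl(p, q):
    # Relative entropy KL(P || Q), summed over bins where P has mass
    p = p / p.sum()
    q = q / q.sum()
    q = np.maximum(q, 1e-12)                 # avoid log(0) for empty bins
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def _requantize(ref, levels):
    # Collapse `ref` into `levels` coarse bins, then expand back, spreading
    # each coarse count uniformly over its originally non-empty fine bins
    out = np.zeros_like(ref, dtype=np.float64)
    for c in np.array_split(np.arange(len(ref)), levels):
        nz = ref[c] > 0
        if nz.any():
            out[c[nz]] = ref[c].sum() / nz.sum()
    return out

def saturation_threshold(activations, num_bins=1024, levels=128):
    """Pick the saturation (clipping) threshold whose truncated-and-quantized
    histogram is closest in relative entropy to the original distribution."""
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    hist = hist.astype(np.float64)
    best_t, best_kl = edges[-1], np.inf
    for i in range(levels, num_bins + 1):
        ref = hist[:i].copy()
        ref[-1] += hist[i:].sum()            # saturate: fold clipped tail in
        kl = _kl(ref, _requantize(hist[:i], levels))
        if kl < best_kl:
            best_kl, best_t = kl, edges[i]
    return best_t
```

Compared with symmetric uniform quantization, which scales by the absolute maximum and lets a single outlier waste most of the quantization range, this kind of entropy-guided saturation clips rare extreme activations and keeps the quantization levels where the distribution's mass actually lies.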
