Deploying TinyML for energy-efficient object detection and communication in low-power edge AI systems

Abstract

Edge Artificial Intelligence (Edge AI) is driving the widespread deployment of neural network models on resource-constrained microcontroller units (MCUs), enabling real-time, on-device data processing. This approach significantly reduces cloud dependency, making it ideal for applications in industrial automation and IoT. However, deploying deep learning models on such constrained devices poses significant challenges due to limitations in memory, computational power, and energy capacity. This paper presents a real-time object detection system optimized for energy efficiency and scalability, which integrates well-established model compression techniques, such as quantization, with a low-cost MCU-based platform. The system leverages MobileNetV2, a lightweight neural network, quantized to balance accuracy against resource consumption. The proposed solution integrates a camera and Wi-Fi module for capturing and transmitting image data, utilizing dual-mode TCP/UDP communication to balance reliability and low-latency transmission for IoT applications. We present a comprehensive system-level analysis, exploring the trade-offs between latency, memory, energy consumption, and model size. Evaluation on the Visual Wake Words (VWW) dataset demonstrates the practical performance and scalability of the system for real-time applications in smart devices, industrial monitoring, and environmental sensing. This work emphasizes the integration of TinyML models with constrained hardware and offers a foundation for scalable, autonomous, energy-efficient Edge AI solutions.
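The abstract's central compression step is 8-bit post-training quantization. The sketch below illustrates the underlying affine (scale/zero-point) quantization scheme in plain Python; it is an illustrative toy, not the paper's actual TensorFlow Lite deployment pipeline, and the example weight values are invented for demonstration.

```python
def quantize_params(values, num_bits=8):
    """Compute an affine (asymmetric) scale and zero point for a float tensor."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # range must include 0.0 so it maps exactly
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = max(qmin, min(qmax, round(qmin - lo / scale)))
    return scale, zero_point

def quantize(values, scale, zero_point):
    """Map floats to uint8 codes: q = clamp(round(v / scale + zero_point))."""
    return [max(0, min(255, round(v / scale + zero_point))) for v in values]

def dequantize(codes, scale, zero_point):
    """Recover approximate floats: v = (q - zero_point) * scale."""
    return [(q - zero_point) * scale for q in codes]

# Hypothetical layer weights, stored in 8 bits instead of 32 (the ~4x saving).
weights = [-0.42, 0.0, 0.13, 0.91, -0.07]
scale, zp = quantize_params(weights)
q = quantize(weights, scale, zp)
recovered = dequantize(q, scale, zp)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
assert max_err <= scale  # reconstruction error is bounded by one quantization step
```

Because the representable range is forced to include zero, an exact zero weight survives quantization, which matters for zero-padded convolutions; this is the same design choice made by standard 8-bit inference schemes.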
Quantitatively, 8-bit post-training quantization reduced model storage by roughly 3-4x, yielding deployable flash footprints of 286-536 KB within a 1 MB flash / 256 KB SRAM budget; on-device inference latency ranged from 3.47 to 14.98 ms per frame with energy per inference of 10.6-22.1 mJ, while the quantized MobileNet variants maintained accuracy close to their full-precision baselines. In wireless reporting, UDP reduced one-way latency relative to TCP, whereas TCP provided higher delivery reliability, underscoring application-dependent protocol trade-offs for real-time embedded deployments.
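The latency/reliability trade-off behind the dual-mode TCP/UDP reporting can be illustrated with a minimal loopback sketch. This is not the paper's MCU firmware (which would be embedded C against a Wi-Fi stack); the gateway, payload, and port choices here are hypothetical stand-ins showing the two send paths side by side.

```python
import socket
import threading

HOST = "127.0.0.1"  # hypothetical gateway address for this loopback sketch
received, acks = [], []

def tcp_report(payload, addr):
    """TCP mode: connection-oriented, blocks until the gateway ACKs delivery."""
    with socket.create_connection(addr) as s:
        s.sendall(payload)
        return s.recv(16)

def udp_report(payload, addr):
    """UDP mode: one best-effort datagram, no delivery guarantee, lowest latency."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, addr)

# Loopback "gateway" sockets, bound up front so the senders cannot race the bind.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind((HOST, 0))
tcp_srv.listen(1)
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind((HOST, 0))

def gateway():
    """Receive one report on each transport, acknowledging only the TCP one."""
    conn, _ = tcp_srv.accept()
    received.append(("tcp", conn.recv(4096)))
    conn.sendall(b"ACK")
    conn.close()
    data, _ = udp_srv.recvfrom(4096)
    received.append(("udp", data))

t = threading.Thread(target=gateway)
t.start()
acks.append(tcp_report(b'{"person": 1}', tcp_srv.getsockname()))
udp_report(b'{"person": 1}', udp_srv.getsockname())
t.join()
tcp_srv.close()
udp_srv.close()
```

A detection node would pick the UDP path when stale frames are worthless (periodic presence reports) and the TCP path when every event must arrive (alarm conditions), matching the application-dependent trade-off the results describe.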
