Deep neural network inference on an integrated, reconfigurable photonic tensor processor


Abstract

Artificial neural networks set the pace in machine vision, natural language processing, and scientific discovery, but their performance depends on fast and efficient tensor computations. Analog photonic systems are a promising alternative to digital electronics because they enable ultra-fast, low-latency computing while avoiding capacitive charging losses and electrical crosstalk. Here we present a photonic tensor processor for deep neural network inference, integrated into a standard 19-inch rack unit with a high-speed electronic interface to PyTorch for seamless hardware deployment. The processor implements an all-optical crossbar with nine inputs and three outputs for parallel intensity-based accumulation of weighted signals. Fabricated in imec's iSiPP50G silicon photonics platform, the chip integrates electro-absorption modulators and photodiodes for scalability and compatibility with high-volume manufacturing. An integrated self-injection-locked microcomb provides stable multi-wavelength carriers. We demonstrate inference on MNIST and CIFAR-10 with 98.1% and 72.0% accuracy, highlighting a compact, reprogrammable platform toward scalable high-speed optical AI accelerators.
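The crossbar described above performs a matrix-vector product in the optical domain: each of the nine input intensities is attenuated by a programmable modulator and each of the three output photodiodes accumulates the weighted intensities on its column. The following is a minimal numerical sketch of that idealized behavior; the function name, the noiseless model, and the restriction of weights to transmissions in [0, 1] are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def crossbar_mvm(intensities, transmissions):
    """Idealized 9-input, 3-output intensity-based crossbar.

    Each output photodiode sums its column's weighted inputs,
    i.e. y = T^T @ x. Optical intensities are nonnegative and
    modulator transmissions lie in [0, 1] (noiseless assumption).
    """
    x = np.asarray(intensities, dtype=float)       # shape (9,)
    T = np.asarray(transmissions, dtype=float)     # shape (9, 3)
    assert x.shape == (9,) and T.shape == (9, 3)
    assert np.all(x >= 0), "optical intensities are nonnegative"
    assert np.all((T >= 0) & (T <= 1)), "transmissions must lie in [0, 1]"
    return T.T @ x                                 # shape (3,)

# Example: uniform inputs through half-transmissive weights.
x = np.ones(9)
T = np.full((9, 3), 0.5)
y = crossbar_mvm(x, T)
print(y)  # [4.5 4.5 4.5]
```

In a real deployment, signed weights and activations would additionally require an encoding scheme (e.g. differential signaling or offset subtraction), since raw optical intensities cannot be negative.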
