Physical AI goes to the operating room: are we ready for the Surgical Data Factory?


Abstract

The operating room remains a paradox: it is one of the most sensor-rich environments in the hospital, yet it produces largely underutilized data. While surgical artificial intelligence (AI) has achieved remarkable progress in recent years, the day-to-day practice of surgery has changed little, with most systems confined to passive decision support. This narrative review traces the evolution of surgical AI from perception to cognition to early forms of action, arguing that the next paradigm shift requires "physical AI": systems capable of meaningful physical interaction and autonomous execution. The clinical motivation for pursuing physical AI is clear: surgical outcomes vary substantially across surgeons, access is constrained by workforce shortages, and high-quality care remains tied to the scarcity of human expertise. If reliable autonomous systems can be developed, surgery could become more standardized, scalable, and reproducible. However, a critical bottleneck persists: the scarcity of synchronized, multimodal training data. The fundamental barrier is environmental rather than algorithmic, as most operating rooms are not configured to measure surgical practice objectively. We propose reconceptualizing the operating room as a "Surgical Data Factory": a closed-loop ecosystem designed to capture multimodal signals, structure them via consensus taxonomies linked to outcomes, and utilize them for training, validation, and monitoring. Surgeons must transition from passive users to active architects of this infrastructure. Investing in systematic data governance is the prerequisite for responsibly developing, validating, and scaling physical AI in surgery.
