Abstract
The rapid rise of bio-hybrid robots and hybrid human-AI systems has triggered an explosion of terminology that inhibits clarity and progress. To investigate how such terms are defined, we conducted a narrative scoping review and concept analysis, extracting 60 verbatim definitions spanning engineering, human-computer interaction, human factors, biomimetics, philosophy, and policy. Each entry was coded on three axes: agency locus (human, shared, machine), integration depth (loose, moderate, high), and normative valence (negative, neutral, positive), and then clustered. Four categories emerged from the analysis: (i) machine-led, low-integration architectures such as neuro-symbolic or "Hybrid-AI" models; (ii) shared, moderately integrated systems such as mixed-initiative cobots; (iii) human-led, medium-coupling decision aids; and (iv) human-centric, low-integration frameworks that foreground user agency. Most definitions adopt a broadly positive valence, revealing a gap between the scholarly literature and risk-heavy popular narratives. We show that, for researchers investigating where the living meets the machine, terminological precision is more than semantics: it shapes design, accountability, and public trust. This narrative review contributes a comparative taxonomy and a shared lexicon for reporting on hybrid systems. We encourage researchers to clarify which sense of Hybrid-AI is intended (algorithmic fusion vs. human-AI ensemble), to specify agency locus and integration depth, and to adopt measures consistent with these conceptualizations. Such practices can reduce construct confusion, enhance cross-study comparability, and align design, safety, and regulatory expectations across domains.