Abstract
A central challenge in artificial intelligence and cognitive science is identifying a unifying principle that governs inference, learning, and action. Active inference proposes such a principle: the minimization of variational free energy. Advocates of active inference argue that the framework subsumes classical models of optimal behavior, including Bayesian decision theory, resource rationality, optimal control, and reinforcement learning, while also instantiating information-theoretic principles such as rate-distortion theory and maximum entropy. However, the literature outlining these conceptual links remains fragmented, limiting integration across fields. This review develops these connections systematically. We show how these major frameworks admit formal correspondences with expected free energy minimization when expressed in variational form, exposing a shared optimization principle that underlies theories of optimal decision-making and information processing. This synthesis is intended both to orient researchers from other fields who are new to active inference and to clarify foundational assumptions for those already working within the framework.
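For readers new to the framework, the quantity at the center of the abstract can be stated compactly. In its standard form (notation here follows common convention and is an assumption about this paper's later definitions), the variational free energy for observations $o$, latent states $s$, approximate posterior $q(s)$, and generative model $p(o, s)$ is

$$
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
\;=\; \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right]}_{\text{approximation error}} \;-\; \underbrace{\ln p(o)}_{\text{log evidence}},
$$

so minimizing $F$ with respect to $q$ drives approximate inference toward the true posterior, while minimizing it with respect to the model (or, prospectively, via expected free energy over policies) underwrites the connections to learning and action that the review develops.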