Abstract
Redundancy is a central yet persistently ambiguous concept in multivariate information theory. Across the literature, the same term is used to describe fundamentally distinct phenomena. Operational redundancy concerns how different inputs relate to the prediction of output states, while informational redundancy concerns content overlap among inputs relevant to an output. These notions are routinely conflated in decompositions of mutual information, leading to incompatible definitions, contradictory interpretations, and apparent paradoxes, particularly when inputs are statistically independent. We argue that the difficulty in defining redundancy is not primarily technical but conceptual: the field has not converged on what redundancy is meant to signify. We formalize this distinction by identifying two classes of redundancy. Operational redundancy encompasses task-relative properties, covering conditions under which inputs are sufficient or substitutable for prediction. Informational redundancy concerns shared content among inputs, grounded in the mutual information between them. Using functional examples and biased input ensembles, we demonstrate the practical distinction between these classes: inputs with no informational overlap can exhibit operational redundancy, while partial observation can induce statistical correlations that create content overlap without reflecting the underlying functional structure. We conclude by proposing a clear separation of these concepts and outlining minimal commitments for each. This separation clarifies why redundancy remains elusive, why no single measure can satisfy all intuitions, and how future work can proceed without redefining information itself.