Abstract
This article analyses proposals to use AI-supported systems to counter 'cognitive warfare' and critically examines the implications of such systems for fundamental rights and values. After explicating the notion of 'cognitive warfare' as used in contemporary public security discourse, the article describes the emergence of generative AI tools that are expected to exacerbate the problem of adversarial activities directed against the online information ecosystems of democratic societies. In response, researchers and policymakers have proposed using AI to devise countermeasures, ranging from AI-based early warning systems to state-run content moderation tools. These interventions, however, interfere to varying degrees with fundamental rights and values such as privacy, communication rights, and self-determination. This article argues that such proposals insufficiently account for the complexity of contemporary online information ecosystems, particularly the inherent difficulty of establishing causality and attribution. Reliance on the precautionary principle might offer a justificatory frame for AI-enabled measures to counter 'cognitive warfare' in the absence of conclusive empirical evidence of harm. However, any such state intervention must be grounded in law and adhere to strict proportionality requirements.