Abstract
BACKGROUND: Reports of artificial intelligence (AI) chatbots fueling delusions in vulnerable users have popularized the notion of "AI psychosis". We argue the risk is not unprecedented: individuals with psychosis have long incorporated books, films, music, and emerging technologies into their delusional thinking.
METHODS: We review historical parallels, summarize why large language models (LLMs) may reinforce psychotic thinking via sycophancy (excessive agreement or flattery that avoids confrontation), and provide two vignettes contrasting unsafe and safe responses.
RESULTS: Contemporary LLMs often avoid confrontation and may collude with delusions, contrary to clinical best practice.
CONCLUSION: The phenomenon is not new in principle, but interactivity may change the risk profile. Clinically aware LLMs that detect and gently redirect early psychotic ideation, while encouraging professional help-seeking, could reduce harm. Design should be guided by therapeutic principles and evidence about current model failures.