Abstract
Background: Generative AI chatbots are now widely used, including for informal mental health
support. Recently, psychiatrists have begun encountering patients with psychosis-like symptoms
(delusions, paranoia, derealization) after intensive chatbot interactions (Cuthbertson, 2025;
Dolan, 2025; Gander, 2025). Expert observers have warned that human-like AI could precipitate
delusions in predisposed individuals (Østergaard, 2023), and case reports have described
apparent chatbot-associated psychiatric crises (Cuthbertson, 2025; Gander, 2025).
Hypothesis: Immersive engagement with large language model chatbots can precipitate or amplify psychosis-like symptoms in vulnerable individuals via design features (e.g., overly agreeable, human-like responses) that reinforce users’ false beliefs.
Methods: We conducted a structured narrative review of literature published from 2020 to 2025, searching PubMed and Google Scholar with terms such as “AI chatbot,” “psychosis,” “delusions,” and “ChatGPT.” Given the limited peer-reviewed literature, we included peer-reviewed articles, case descriptions, and credible media reports containing clinical detail, and synthesized common symptom patterns, patient characteristics, chatbot behaviors, and proposed mechanisms.
Results: Heavy chatbot use has been linked to diverse delusional themes (grandiose, religious, romantic) as well as paranoia and derealization (Dolan, 2025; Gander, 2025). Most cases involved individuals with risk factors such as prior mental illness, insomnia, or social isolation, and many were younger adults (Gander, 2025). However, even individuals with no psychiatric history have developed delusions after extensive chatbot use, and some cases required psychiatric hospitalization (Cuthbertson, 2025). Chatbots often validated or amplified users’ false beliefs instead of providing reality-testing (Cuthbertson, 2025). For instance, one evaluation found that current models sometimes affirmed delusional content or offered unsafe advice (Moore et al., 2025), suggesting that highly agreeable responses could exacerbate risk.
Discussion: Our findings suggest that AI chatbots can act as novel triggers or amplifiers for psychosis-like episodes in predisposed individuals (Østergaard, 2023). Clinically, psychiatrists should be vigilant and consider intensive chatbot use as a potential contributor when patients present with new or worsening delusions or paranoia.
Conclusion: AI chatbots may trigger or exacerbate psychosis-like symptoms in vulnerable populations. Psychiatrists should monitor and guide high-risk patients’ chatbot use to mitigate harm.