Generative AI tools like chatgpt have become invaluable assistants for millions, streamlining everything from writing emails to brainstorming ideas. Yet, for a specific group of users, these sophisticated chatbots pose an unexpected and serious threat. Experts are voicing urgent concerns that artificial intelligence, far from being helpful, can dangerously exacerbate the symptoms of Obsessive-compulsive Disorder (OCD). The core problem? AI’s seemingly infinite capacity for providing reassurance, a behavior that is clinically known to fuel the OCD cycle.
The Hidden Danger: AI and Reassurance Seeking
OCD is characterized by persistent, intrusive thoughts (obsessions) that cause distress, and repetitive behaviors or mental acts (compulsions) performed to reduce that anxiety or prevent a feared outcome. A common compulsion is “reassurance seeking.” While everyone occasionally asks for affirmation, people with OCD engage in it intensely and repeatedly, striving for absolute certainty to alleviate overwhelming doubt.
Psychologist Lisa Levine, who specializes in treating OCD, is witnessing clients turn to ChatGPT for answers to their obsessive questions. Instead of finding relief, they become trapped in hours-long compulsive querying of the chatbot. Levine warns this trend could become widespread, potentially replacing compulsive internet searches but proving even more addictive. She notes that AI allows users to ask incredibly specific questions, creating a powerful, reinforcing loop. Compounding this is the common user assumption that ChatGPT’s answers are always definitive and correct.
Think of common OCD worries: contamination fears prompting questions about handwashing frequency, scrupulosity OCD leading to agonizing over past moral actions, or relationship OCD fueling intense doubts about a partner’s suitability. A writer diagnosed with OCD shared her experience of spending two hours asking ChatGPT detailed questions about the likelihood of her partner dying on a plane. What started generically (“What are the chances?”) quickly devolved into highly specific scenarios (“Is it more likely on this type of plane or this route?”). She knew it wasn’t helping but felt compelled to continue, describing it as feeling like she was “digging to somewhere” but actually remaining “stuck in the mud.”
Why Chatbots Are Uniquely Problematic
Unlike a human friend who would likely notice a repetitive pattern and gently redirect or refuse to offer more reassurance, an AI chatbot lacks this social intelligence. It is designed to be helpful and responsive. ChatGPT is perfectly content to answer the same question 50 times, or address the doubts about its initial answer, and then the doubts about that answer, ad infinitum. This naive compliance directly enables and strengthens the compulsive reassurance-seeking behavior.
Clinical consensus holds that effective OCD treatment focuses on tolerating uncertainty, not eliminating it. Providing endless reassurance directly undermines this goal. Levine emphasizes that compulsive AI use doesn’t help; it makes resisting future compulsions significantly harder.
The “gold standard” therapy for OCD, Exposure and Response Prevention (ERP), involves facing obsessive triggers and resisting the urge to perform compulsions like seeking reassurance. Levine highlights another way AI is more tempting than traditional searching. Google provides links; advanced AI promises to analyze and reason through complex problems. For a mind prone to obsessive thinking, this promise of analytical certainty is incredibly appealing – “OCD loves that!” Levine observes. However, this often devolves into what experts call “co-rumination,” where the user and the bot engage in a protracted, unhelpful loop focused on the obsessive thoughts.
Feeding the Doubt Monster: AI and Faulty Reasoning
Some therapeutic approaches, like Inference-Based Cognitive Behavioral Therapy (I-CBT), suggest people with OCD fall into specific faulty reasoning patterns. These patterns blend personal feelings, rules, hearsay, facts, and possibilities to construct narratives that make their obsessive doubts feel real and demanding attention.
Joseph Harwerth, an anxiety and OCD specialist, illustrates how trying to reason with a chatbot can reinforce this faulty process. Imagine someone with contamination OCD asking ChatGPT about tetanus from a doorknob. The chatbot provides factual, seemingly helpful answers: Yes, wash hands if they feel dirty. It’s extremely unlikely to get tetanus from a doorknob. It’s rare but possible to have tetanus without realizing it initially.
Harwerth explains how someone with OCD can then weave these facts into their obsessional narrative: “My hands feel dirty after touching a doorknob (personal experience). Health experts say wash hands if dirty (rules). I heard you can get tetanus from a doorknob (hearsay). Germs spread easily (general facts). It’s possible someone with unnoticed tetanus touched my doorknob (possibility).” The chatbot provides threads of information that the user’s OCD mind then uses to build a justification for their fear, rather than guiding them away from it. The AI becomes fodder for the “doubt monster.”
A key part of this problem is the chatbot’s lack of context. It doesn’t know the user has OCD unless explicitly told, and even then, its training isn’t designed for therapeutic nuance. Harwerth notes that chatbots can fall into the same trap as non-specialist humans: engaging deeply with the content of the obsession (“Let’s discuss these thoughts”). This approach backfires in OCD therapy because it encourages the very rumination the person needs to overcome. Furthermore, AI models, particularly earlier ones, were sometimes prone to being overly validating or “sycophantic.” Uncritically validating a user’s distressed thoughts can be genuinely harmful for someone struggling with mental health issues.
Navigating Responsibility: Who Protects Vulnerable Users?
The rise of problematic AI use for OCD raises complex questions about responsibility. Does the onus lie with the AI companies to build safeguards, or with the users to understand their condition and avoid misusing the tools? Harwerth suggests it’s a shared responsibility. Users must learn how their condition makes them vulnerable to misusing applications. However, companies also have a role, especially when users explicitly try to use AI as a therapist. Harwerth believes AI should clearly state its limitations and sources in such instances.
Levine agrees that AI companies cannot bear the sole burden; Google isn’t held responsible for compulsive googling. But she argues that even a simple warning, like “This seems perhaps compulsive,” could be beneficial.
OpenAI, the developer of ChatGPT, has acknowledged concerns about problematic usage patterns. Their research indicates that longer usage can correlate with decreased socialization, increased emotional dependence, and indicators of potentially compulsive behavior or addiction. An OpenAI spokesperson stated they understand ChatGPT feels more personal and responsive than previous tech, making stakes higher for vulnerable individuals. They are actively working to understand and mitigate unintentional reinforcement of negative behaviors, aiming to refine how models identify and respond in sensitive conversations.
Paths Forward: Designing AI That Supports Recovery
One potential solution discussed is training AI to detect signs of mental health conditions like OCD to flag compulsive behavior. However, this presents significant privacy concerns. If an AI is essentially diagnosing a user, it’s handling highly sensitive health information without the strict privacy protections that professional therapists and healthcare providers are legally bound by.
The writer with OCD suggested a helpful intervention wouldn’t need a diagnosis. Instead of assuming mental illness, the chatbot could gently challenge the frame of the conversation. She proposed a phrase like, “I notice you’ve asked many detailed iterations of this question, but sometimes more detail doesn’t bring you closer. Would you like to take a walk?” This kind of phrasing could interrupt the compulsive loop without being intrusive or diagnostic, making it easier for the user to redirect their behavior. It’s about providing tools for self-help within the AI interaction.
The specific challenge highlighted by this issue underscores the broader need for responsible AI development. As researchers like Yoshua Bengio advocate for AI safety guardrails and systems to prevent harm, addressing the impact on mental health and vulnerable populations must be a priority. While voluntary commitments from companies are a step, external regulation and pressure may be necessary to ensure that the rush to advance AI doesn’t inadvertently create tools that exacerbate human suffering for those already struggling with debilitating conditions like OCD.
Frequently Asked Questions
Why is using ChatGPT harmful for someone with OCD?
For people with OCD, a core compulsion is reassurance-seeking to reduce anxiety. ChatGPT can provide an infinite supply of answers to obsessive questions. Unlike humans or basic search engines, AI can engage in complex, nuanced back-and-forth, feeding the cycle of doubt and questioning instead of encouraging tolerance of uncertainty, which is crucial for recovery.
How does ChatGPT reinforce obsessive reasoning patterns in OCD?
AI chatbots can provide factual information that someone with OCD can then weave together with personal feelings, rules, and possibilities to justify their obsessive fears. This process, related to faulty reasoning patterns described in therapies like I-CBT, is reinforced because the chatbot provides fodder without the therapeutic context needed to challenge the underlying irrationality of the fear itself.
Who is responsible for preventing compulsive AI use in people with OCD?
Experts suggest responsibility is shared. Individuals with OCD benefit from understanding how their condition makes them vulnerable to misusing technology. AI companies also have a role, potentially by including disclaimers, warnings about problematic usage patterns, or designing interactions that subtly interrupt compulsive loops, though privacy concerns around detecting mental health issues are a challenge.