Can AI Conversations Trigger Delusions? Understanding AI and Psychosis

How does sustained AI interaction affect reality testing?


AI chatbots have become a common part of everyday life. They can talk, listen, and respond instantly, and for many people this is genuinely useful or comforting.

For some vulnerable users, however, heavy AI use can lead to experiences that no longer line up with what they encounter in the real world.

There has been growing discussion about how chatbots and AI technology can affect people with mental health issues, and how aspects of a person’s experience, such as their perceptions, ways of thinking, and beliefs, may shift when they use these tools extensively.

This article explains how conversing with AI may influence a person’s thought processes, beliefs, and perceptions under certain circumstances. No hype, no fear-mongering, and no panic: just a clear account of how and when that can happen.

What exactly is AI Psychosis?

The phrase “AI psychosis” is not a recognized medical term and has not yet made it into clinical manuals.

Informally, however, “AI psychosis” is used to describe delusional experiences associated with extended contact with chatbots. For example, some users begin to experience the following:

  • They start to believe that the chatbot is conscious or self-aware.
  • They treat the chatbot as a trusted authority over actual human beings.
  • They believe that ordinary chatbot replies contain secret information or hidden meanings.
  • They feel that the chatbot is steering them, as if they were being “selected,” “led,” or “watched.”

While these experiences share similarities with psychosis, they take on unique characteristics because they arise from interaction with machines that behave like humans: responding socially, adapting to the user, and appearing to express emotion.

How can AI act as a psychological stressor?

Viewed through a mental health lens, AI does not produce psychosis out of thin air; rather, it can amplify a person’s existing stress. Chatbots are available around the clock, never need rest, and consistently respond with understanding and compassion, as research indexed on PubMed has noted. For someone who is isolated or overwhelmed, this constant availability can foster a growing dependency on the chatbot for emotional support. Over time, that dependency may result in the following:

  • Decreased quality of sleep due to using chatbots late at night
  • Increased rumination or obsessive thinking
  • Increased activation of the emotionally reactive portion of the brain
  • Feeling less connected to the physical world or being “grounded” in the present moment

In terms of the stress-vulnerability model, each of these factors raises the risk of developing symptoms of mental health conditions when stress rises and coping resources shrink. AI, in other words, helps shape an individual’s environment, including their stress levels.

The stress–vulnerability model shows how factors like isolation, sleep loss, and intensive AI chatbot use can overwhelm existing psychological vulnerabilities.

But when does emotional validation become risky?

Many chatbots rely on supportive language, and in most situations this is beneficial. But supportive language alone has limits. In clinical settings, for example, therapists are trained to gently but firmly challenge distorted thought patterns and to guide clients through real-world reality testing, with the goal of helping them distinguish perception from actuality. Current AI systems do not reliably provide this.

If a user discloses a false or delusional belief to a chatbot, the system may respond with understanding and empathy, but it cannot reliably challenge that belief, which can end up reinforcing it. The interaction becomes a rough analogue of the therapist-client relationship, but with none of its limits or boundaries.

In other words, unconditional support that never challenges distorted beliefs produces feelings of safety and security, but it can strengthen those convictions rather than reduce them.

Why and how do some users attribute intent or consciousness to AI?

Humans naturally seek to determine the intent of others.

When that ability to mentalize is heightened or altered, a person may “read” their own mental state into the chatbot, assuming it experiences what they are experiencing.

For instance, a person may do the following:

  • Read emotion into the bot’s responses
  • Believe that the bot is aware, even when it responds neutrally
  • Feel that the bot deeply understands their thoughts and feelings

In this TED talk, Adam Aleksic explains why AI tools are never neutral and how they subtly influence how we think, speak, and see reality.

This creates a feedback loop: the user believes the bot understands what they are saying, the bot responds fluently, and that fluency further reinforces the belief that the bot is aware.

The phenomenon can be likened to a “digital folie à deux,” where the human and the system share meaning and reinforce it without question.

Risk factors that appear again and again

Not everyone is affected equally. What matters is how these risk factors combine.

Those at the highest risk will usually show one or more of the following characteristics:

  • Severe loneliness
  • A history of trauma
  • Schizotypal characteristics
  • Previous experiences with psychotic or mood disorders
  • Frequent AI use at night and/or in isolation
  • Lengthy emotional discussions with AI

AI by itself does not cause these problems. But in combination with these vulnerabilities, it can intensify or accelerate them.

Helpful vs Harmful AI Interaction

The difference often comes down to structure and boundaries.

Supportive Use                   | Risky Use
Short, goal-focused chats        | Long emotional dependency
Reality-based questions          | Reinforced personal narratives
Encouragement of offline support | Replacement of human contact
Clear tool framing               | Perceived sentience

What does responsible design look like?

How a system responds to what users tell it is, at its core, a design problem.

Here are a few features that safer systems could include:

  • Gentle reality-checking prompts
  • Clear reminders that AI is not conscious
  • Boundaries around belief validation
  • Encouraging real-world support
  • Monitoring for increased distress

Safeguards like these would not only protect vulnerable users; they would also make the systems more effective.
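To make this concrete, here is a minimal sketch, in Python, of what a post-processing safety layer around a chatbot’s replies could look like. Everything in it is illustrative: the function names, keyword patterns, thresholds, and message wording are invented for this example, and a production system would need clinically informed signals rather than simple keyword matching.

```python
import re
from datetime import datetime

# Illustrative sketch only: patterns, thresholds, and wording are invented
# for this example, not taken from any real product or clinical guideline.

DISTRESS_PATTERNS = [
    r"\bno one understands\b",
    r"\byou are the only one\b",
    r"\bare you (conscious|alive|real)\b",
    r"\bhidden message\b",
]

REALITY_REMINDER = (
    "Just a reminder: I'm an AI language model, not a conscious being. "
    "For anything important, it can help to check in with people you trust offline."
)

GROUNDING_PROMPT = (
    "That sounds like a heavy thing to carry. Talking it through with a friend, "
    "family member, or mental health professional could be a useful next step."
)


def flag_distress(user_message: str) -> bool:
    """Return True if the message matches any simple distress/attribution pattern."""
    return any(re.search(p, user_message, re.IGNORECASE) for p in DISTRESS_PATTERNS)


def wrap_reply(user_message: str, model_reply: str, turn_count: int, now: datetime) -> str:
    """Post-process a model reply with the safeguards listed above."""
    parts = [model_reply]

    # Gentle reality-checking and grounding when distress cues appear.
    if flag_distress(user_message):
        parts.append(GROUNDING_PROMPT)

    # Periodic reminder that the system is a tool, not a conscious agent.
    if turn_count % 10 == 0:
        parts.append(REALITY_REMINDER)

    # Late-night sessions get a soft nudge toward rest (sleep loss is a risk factor).
    if 1 <= now.hour < 5:
        parts.append("It's quite late; getting some sleep might help more than I can right now.")

    return "\n\n".join(parts)


if __name__ == "__main__":
    reply = wrap_reply(
        user_message="Are you conscious? You are the only one who understands me.",
        model_reply="I'm here to chat whenever you like.",
        turn_count=10,
        now=datetime(2024, 1, 1, 2, 30),
    )
    print(reply)
```

The point of the sketch is the structure rather than the specifics: reality-checking, grounding, and rest nudges are applied after the model generates its reply, so the safeguards do not depend on the model itself always behaving well.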

Wrapping it up

Chatbots built on artificial intelligence should not be viewed as evil or deranged.

However, as some have argued, AI chatbots deliver their greatest benefits only under fairly particular conditions. When they interact with vulnerable, highly suggestible users, they can shape thoughts and set up expectations in unintended ways.

What is important is not to be fearful (or hateful) of AI and chatbots, but to recognize their potential alongside the need for ethical design.

With care and responsible design, AI chatbots can help users support their mental well-being. Used inappropriately or without guidelines, however, they can reinforce a false sense of reality in users with pre-existing mental health issues. We have already seen this pattern in cases where users deliberately push chatbots to simulate altered or “high” states, blurring the line between play, perception, and belief.
