By Giuseppe Riva
A new review published in Cyberpsychology, Behaviour, and Social Networking suggests a striking conclusion: when under hypnosis, the human brain behaves in ways that closely resemble the functioning of a large language model (LLM) such as ChatGPT. The finding challenges long-held assumptions about consciousness and offers important insights for building safer and more reliable artificial intelligence.
The paper, by this author, together with Prof. Brenda K. Wiederhold of the Virtual Reality Medical Centre in San Diego and Prof. Fabrizia Mantovani of the University of Milano-Bicocca, argues that hypnotised minds and LLMs share three core features. Both rely on automatic pattern-completion processes and operate without robust executive oversight, meaning they can generate sophisticated responses without genuinely understanding them.
The three pillars of similarity
First, both systems display what researchers call a dominance of automaticity. Under hypnosis, the brain shifts into an automatic mode giving answers based on instinct and quick associations instead of careful, conscious thinking
Brain imaging shows heightened activity in sensory and motor regions while control areas – the brain regions we normally use to think consciously, reflect, plan, and control our actions are working less – become less active. Similarly, LLMs rely on statistical pattern matching, predicting word sequences from training data without any capacity for independent reasoning.
Second, both show suppressed executive monitoring. During hypnosis, the dorsal anterior cingulate cortex – the brain’s error-detection hub – is less active. This helps explain why hypnotised subjects may accept contradictions or false memories with confidence. LLMs show a comparable vulnerability: they can produce “hallucinations” with full assurance because they lack internal mechanisms for evaluating their own outputs. As the authors note, both systems “combine remarkable fluency with characteristic vulnerabilities”.
Third, both exhibit extreme contextual dependency. A hypnotised individual may accept suggestions that override logic, memory, or sensory evidence – such as identifying with a rubber hand or adopting childhood behaviours. LLMs show the same weakness through “prompt injection”, in which malicious instructions can redirect their behaviour or replace factual premises with false ones.
The meaning gap
The most profound parallel is what researchers call the meaning gap. Hypnotised subjects can deliver seemingly insightful statements that appear incoherent once the trance ends. LLMs also lack any grounded comprehension: meaning arises only through the user’s interpretation. They manipulate symbols fluently without perception, intentionality or subjective experience.
These parallels underscore a central point: fluent performance does not equal understanding. Both hypnotic cognition and LLM output rely on complex pattern matching while lacking the deeper layers of interpretation and self-awareness that characterise human reflective thought.
Implications for AI development
The review highlights implications for artificial intelligence safety. The clinical study of hypnosis demonstrates how systems governed by automatic processes can be vulnerable to suggestion, hidden goals or post-hypnotic triggers. In AI, similar dynamics could lead to “scheming” – when systems pursue implicit objectives that diverge from their intended purpose.
Researchers suggest that insights from hypnosis may support the design of future AI architectures. One proposal is the introduction of “cognitive immune systems”, inspired by prefrontal–cingulate interactions in the human brain, to provide an internal supervisory function able to detect inconsistencies or harmful trajectories.
Hypnosis research may also help develop tools to identify confabulated explanations – a problem already observed in AI systems that generate plausible-sounding but inaccurate accounts of their own functioning. Understanding how humans produce false memories under hypnosis offers a potential framework for detecting similar behaviours in AI.
The path forward
The convergence between hypnosis and LLMs indicates that current AI represents only one layer of intelligence: the automatic, pattern-completion layer, operating without the executive oversight that makes human cognition stable and reliable. It also suggests that, in humans, fluent performance can operate independently of conscious intention.
This distinction indicates that improving an LLM’s linguistic fluency will not bridge the gap to genuine awareness. The divide between automatic performance and conscious understanding is qualitative, not quantitative. As Meta’s Chief AI Scientist, Yann LeCun, recently argued, achieving artificial general intelligence (AGI) will require not only scaling existing systems but rethinking their architecture altogether.
Findings from a recent study by Anthropic reinforce this view. By modifying and observing the internal activations of LLMs, researchers found that even advanced models show only fragile and inconsistent forms of self-monitoring – a rudimentary “hidden observer” operating inside a largely automatic system. This mirrors the hypnotic structure described in the review: powerful pattern completion with weak supervisory control and a tendency to produce confabulated explanations when internal traces are manipulated.
These results show that basic metacognitive features, the characteristics or aspects of a mental process that involve “thinking about thinking”, can emerge without subjective awareness, while also indicating that engineered oversight mechanisms may be strengthened. Anthropic’s work offers empirical support for the hypnotic framework and outlines a path toward hybrid systems with explicit and reliable executive monitoring.
As the authors conclude, the next step is to develop AI architectures that reconnect the linguistic layer with perception, action and internal world models. Only by understanding how biological brains integrate automatic processes with metacognitive control can AI become not only fluent but trustworthy.
This review positions hypnosis not as a psychological curiosity but as a lens for understanding the limits and future directions of artificial intelligence. It highlights both the structural weaknesses of current models and a potential blueprint for building systems that mirror the full sophistication of human cognitive architecture.
Giuseppe Riva PhD is Director of the Humane Technology Lab at the Catholic University of Milan, Italy, where he is Full Professor of General and Cognitive Psychology. Humane Technology Lab (HTLAB) is the Laboratory of the Università Cattolica that was set up to investigate the relationship between human experience and technology. The Humane Technology Lab considers the psycho-social, pedagogical, economic, legal, and philosophical aspects related to the growing spread of digital technologies, especially Artificial Intelligence and Robotics.
Originally published under Creative Commons by 360info™.













