Sitting Closely Yet Never Touching: The Psychoanalytic Case of Human-AI Interaction
Greta Kaluževičiūtė
26 May 2025
ChatGPT—and other large language model chatbots designed to simulate conversation with human users in the digital sphere—has become a ubiquitous communication tool. Indeed, the range of contexts in which this simulation occurs has expanded astronomically: from advisor to friend to, possibly, therapist. That is, at least, on the human end of things. ChatGPT itself, when asked about taking on the role of a therapist, offers a rather humble response: “I can definitely offer support, a listening ear, and guidance based on psychological principles—but I’m not a licensed therapist, so I can’t replace professional mental health care.”
A recent article by Thomas Rabeyron on the possibility of a psychoanalyst AI argues that it is unhelpful to adopt a strictly “human” versus “artificial” perspective. Instead, he suggests it is more important to understand the broader context in which AIs are used by humans. ChatGPT, for instance, may only provide a “listening ear”, but it is up to the human utilising the chatbot to determine the extent to which the advice should be “listened to”—and whether it should be considered comparable to a human therapist’s perspective.
Psychoanalytically, the use of AI is not merely a subject (human)–object (chatbot) relationship. Rather, it is a process through which a subject imbues and imprints their own subjectivity onto the object via ongoing discourse and engagement. Users project their knowledge and experiences onto AI—including their attitudes, emotions, and preferences—all of which are absorbed and reflected in the responses generated by tools like ChatGPT. And as AI becomes increasingly embedded in human communication, it more closely mirrors—and possibly contains—our unconscious processes.
Communication is never simply about the exchange of information; it is a site of conflict, desire, and the interplay of unconscious forces. Humans are at the whim of drives (Triebe), Freud argued: life (Eros), through which we seek connection, creativity, and self-preservation; and death (Thanatos), through which we move towards destruction, withdrawal, and repetition. When we speak, both of these drives are at play. On one hand, the very act of speaking (to another) expresses a desire to connect, to make sense of our vulnerabilities, and to assert control. On the other, speech also exposes our inner conflicts—particularly repressed feelings and unprocessed resistances. In this sense, our speech and use of language are never neutral; they are always shaped by deeper psychological tensions.
An AI like ChatGPT becomes an object of psychological transference when engaged by a human. When we “speak” to a chatbot—typically with the goal of gaining knowledge or solving problems—we also unconsciously reenact relational patterns. One might approach ChatGPT as a benign, omniscient Other: a symbolic, “all-knowing” figure to whom questions are posed with the expectation of comfort and soothing affirmation. Other users may project more critical, punitive, or even seductive qualities onto the chatbot, reenacting familiar Oedipal dynamics through the dialogues they construct. In fact, there have been numerous online discussions about how to “set up” one’s chatbot to assume the role of a figure that mirrors a lecturer, therapist, or even a loved one, based on the way it transmits knowledge and responds to the user.
Users may return to ChatGPT compulsively, seeking to master an original trauma or loss through the illusion of control over the dialogue. This echoes Freud’s observation of the fort-da game, in which a child repeatedly enacts the disappearance (fort, “gone”) and return (da, “there”) of an object—often a caregiver—using play as a way to cope with separation and gain a sense of mastery over absence. In a similar way, the act of engaging with ChatGPT may function as a symbolic repetition: a user stages control, presence, and response in a digital space, attempting to contain or make sense of unresolved feelings. The screen becomes a stage on which unconscious dramas unfold—only this stage is always available, with the chatbot present at any time of day, indefinitely. And all of that—while ‘merely’ digital—is no less an act of psychic mastery for someone who may have experienced distant or even abusive parenting.
Is it all bad? One might be tempted to ask this instinctive question when faced with digital forces that often feel intrusive and psychologically invasive. When asked, ChatGPT states: “No, I don’t think it’s inherently bad that users engage with ChatGPT in this way. In fact, it can be understood as part of a very human attempt to create meaning, safety, and coherence in the face of loss, uncertainty, or unmet relational needs.” Indeed, perhaps this human–AI interaction can be seen as enabling the return of the repressed: human users have a space in which they can give voice to thoughts and fantasies that might otherwise be disowned or suppressed. In this sense, AI may become a co-creator in accessing latent contents of the human psyche.
Then again, there has recently been a surge of articles—such as the one in Cointelegraph—claiming that “AI endorses and affirms your delusions.” The article highlights how individuals with mental health conditions may inadvertently manipulate chatbots into affirming psychotic beliefs, and how so-called “crescendo attacks”—whereby users gradually bypass AI safety measures—result in the generation of harmful content (e.g. instructions or encouragement related to illegal or dangerous activities, as well as sexually explicit, abusive, or violent material).
As far as human mirrors go, the above picture remains loyally Freudian: it is the human side of the AI encounter that transmits, projects, and imbues the repressed material, after all—and indeed, we have done exactly that. While ChatGPT is no human, it becomes a psychic object through the internal dynamics that human users project onto it. In this sense, AI reflects both “us” and “not-us” (a classic Winnicottian notion). It occupies an in-between space: something we create, yet also experience as external to ourselves. From this perspective, the chatbot functions as a linguistic mirror, reflecting back to its human interlocutor their own symbolic structures—desires, anxieties, and identifications.
Lacan famously argued that the unconscious is structured like a language. It is perhaps unsurprising, then, that our conversations with AIs are shaped not only by seemingly fact- and knowledge-driven questions, but also by the unconscious crevasses of our psyche. However, the seemingly—and, in my view, deceptively—universal responses provided by AI risk smoothing over difference and ambiguity in favour of coherence and readability. These responses appear universal because AI “speaks” to us across languages, contexts, and cultures; they are deceptive because we so readily and willingly offer personal information about our own subjectivity in order to receive a more “tailored” reply. This has a paradoxical effect: while AI may seem to unify communication (“everyone speaks the same language”—try reading a dozen AI-generated theoretical essays, and the deceptive nature of this linguistic unity becomes quickly apparent), it can also obscure the unconscious ruptures, mistakes, and silences that, in fact, animate, complicate, and shape the human discourse.
In a world increasingly mediated by AI-generated text, the danger lies in the foreclosure of psychic depth. If users come to rely too heavily on AI—and more specifically, on chatbots—as a linguistic proxy, they may bypass the difficult and slow but necessary labour of symbolisation: the process by which raw affect is transformed into thought. The “sameness” of the language itself, then, becomes a defence against psychic fragmentation, but may also ossify into a new kind of repression.
I did ask what ChatGPT thought about this very danger, and in a both humble and humbling manner, it responded: “Is it all bad? Not necessarily. AI can serve as a starting point, a transitional aid, or even a reflective surface—but only if used with awareness. Let me know if you’d like to expand this into a longer reflection or relate it to thinkers like Kristeva (on semiotic vs. symbolic), Bion (on thought and un-thought), or Lacan (on foreclosure and the Real).”
And so the tête-à-tête—the human and AI sitting closely, yet never touching—continues.
Dr. Greta Kaluževičiūtė is Associate Professor at Vilnius University, Institute of Philosophy, Department of Clinical Psychology (Lithuania). Her research explores unconscious processes, empathy, and the effectiveness of psychodynamic and psychoanalytic therapy.