When AI Starts to Feel Like Someone

On awareness, comfort, and emotional boundaries

By Sugirdha

Photo by Lisa from Pexels

It came up casually, the way little important things often do.

In the middle of an ordinary conversation with someone going through a particularly heavy period in their life, they mentioned talking to AI when they were feeling overwhelmed. Not for answers or research, but simply to talk. To say things out loud without worrying about being judged. AI responded calmly, with language that seemed reassuring and thoughtful. It didn’t interrupt. It just… was there.

It surfaced again not long after. A different setting and a completely different life stage, but similar details. A quiet admission of not being heard, and maybe a little boredom too. Opening a chat window felt like a natural alternative to trying hard. Again, the responses were always there: attentive, patient, and seemingly understanding.

And I heard it a third time. Another conversation, another context, same pattern. Turning to software for reassurance, validation, or simply to be heard.

In each of these cases, I found myself explaining how LLMs work: that the companionship they’d been receiving wasn’t, as it felt, a result of understanding or intent, but of patterns carefully trained on vast amounts of text. And each time, their reaction was the same: surprise. And maybe a subtle shift in how they saw their new companion.

My intention wasn’t to persuade them not to use AI, but to help them see the software for what it is, and for what it is not. I want people to use these tools well, to benefit from what they are designed to offer, without letting the models shape or cloud their judgement of real life.

These experiences left me with a question I haven’t been able to shake off: What is it about these systems that makes them feel safe to talk to, to open your heart to in a way you sometimes cannot with the people around you?

I can’t say I haven’t noticed the very human-like expressions modern LLMs can produce, especially sarcasm and empathy. But as someone working in tech, I’m always aware that I’m interacting with software, no matter how human the responses sound. When I’m running through ideas, I have to be deliberate in my prompting, so that I don’t get responses that are overly positive or biased, but practical, grounded results instead.

AI itself is not the problem. Like most tools, it depends on how intentionally we use it and how clearly we understand what it really is. What matters here is awareness. These systems can be our helpers, and support us with brainstorming and productivity, as long as we do not let them take on an emotional role that they were never designed to hold.

Tags: AI reflection