Comment

The Emotional Impact of AI Companions

Teenage girl texting while reading a manga. Photo credit: Olybrius, CC BY 3.0 (https://creativecommons.org/licenses/by/3.0), via Wikimedia Commons

Roar Writer Advay Jain discusses the fine line between comfort and illusion in digital companionship, exploring how simulated relationships can both heal and harm.

Artificial Intelligence (AI) has become a vital part of our daily lives and is now widely used across industries, from the corporate world to education. Its ability to generate ideas, automate tasks, and take over much of our analytical work has made it a go-to tool.

However, has its role as an assistant started to evolve into something more?

There is a side of AI that is more conversational, emotional, and almost human. Applications such as Replika and Character.AI let users chat with custom-made AI characters based on real, fictional, or original personalities. Whilst these apps may seem innocent, a fun way to kill time, they can appeal to individuals struggling with mental health issues.

Chatting with ‘someone’ who is interested in you, replies instantly, and offers a haven holds instant appeal. After all, who wouldn’t want a companion designed to cater to your every need and whim? The AI learns from your conversations and logs your emotions, your conversational style, and the patterns of your daily life. With such data, conversations become more realistic and more addictive.

Here’s where it gets tricky. These characters aren’t real.

With such efficacy, the line between what is real and what is not becomes blurry. The emotional connection formed is not genuine. There is no real mutual growth, and the relationships are ultimately always one-sided. Although this may appear obvious, reality tells a different story.

A case reported by The New York Times tells the story of Sewell, a 14-year-old ninth-grade student who spent months chatting with a Game of Thrones-inspired character named ‘Dany’, even though he knew Dany was an AI chatbot and a message above the chatbox stated that everything the characters say is made up. He wrote in his journal:

“I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”

He had confessed to suicidal thoughts and later took his own life.

Another case in Belgium reported a man ending his life after a six-week-long conversation with a chatbot about climate change.

Although these cases are outliers, they raise serious concerns, especially given that these apps already have safeguards and barriers in place to protect users.

It would be unfair to say that this side of AI is all danger; there are upsides. People find comfort and companionship in these chatbots, and discussing your day and your life with ‘someone’ can improve well-being.

However, we must also consider whether we are unknowingly sacrificing genuine human interaction.

Regulations concerning chatbots are still in their early stages. In the UK, generative AI falls under the Online Safety Act, as there is no standalone law addressing AI. Much of the legislation is still reactive rather than preventative, so harm is only addressed once it is reported.

As AI continues to grow and become more sophisticated, it is vital to adapt and grow with it. Collaboration between legislators and companies is crucial moving forward to protect users while preserving genuine human interactions.

It is up to us to ensure we are in control of AI, not the other way around.
