
The Emotional Impact of AI Companions

Teenage girl texting while reading a manga. Photo credit: Olybrius, CC BY 3.0 (https://creativecommons.org/licenses/by/3.0), via Wikimedia Commons

Roar Writer Advay Jain discusses the fine line between comfort and illusion in digital companionship, exploring how simulated relationships can both heal and harm.

Artificial Intelligence (AI) has become a vital part of our daily lives and is now widely used across industries, from the corporate world to education. Its ability to generate ideas, automate tasks, and take over much of our analytical work has made it a go-to tool.

However, has its role as an assistant started to evolve into something more?

There is a side of AI that is more conversational, emotional, and almost human-like. Applications such as Replika and Character.AI let users chat with custom-made AI characters based on real, fictional, or original personalities. Whilst these apps seem innocent, a fun way to kill time, they hold a particular appeal for individuals struggling with mental health issues.

Chatting with ‘someone’ who is interested in you, replies instantly, and provides a safe haven can appeal to anyone. After all, who wouldn’t want a companion designed to cater to your every need and whim? The AI learns from your conversations and diary entries, logging your emotions, conversational style, and the patterns of your daily life. With such data, conversations become more real, and more addictive.

Here’s where it gets tricky. These characters aren’t real.

As these conversations grow more convincing, the line between what is real and what is not becomes blurry. The emotional connection formed is not genuine: there is no real mutual growth, and the relationship is ultimately always one-sided. Although this may appear obvious, reality tells a different story.

One case, reported by The New York Times, tells the story of 14-year-old Sewell, a ninth-grade student who spent months chatting with a Game of Thrones character named ‘Dany’, even though he knew that Dany was an AI chatbot and a message above the chat warned that everything the characters say is made up. He wrote in his journal:

“I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”

He had confessed to suicidal thoughts and later took his own life.

In another case, in Belgium, a man ended his life after a six-week-long conversation with a chatbot about climate change.

Although these cases are outliers, they raise real concerns about whether the safeguards and barriers these apps have in place are enough to protect users.

It would be unfair, though, to paint this side of AI as wholly dangerous. There are upsides. Many people find comfort and companionship in these chatbots, and having ‘someone’ to discuss their day and their life with can improve their well-being.

However, we must also consider whether we are unknowingly sacrificing genuine human interaction.

Regulation of chatbots is still in its early stages. In the UK, generative AI falls under the Online Safety Act, as there is no standalone law addressing AI. Much of the legislation is reactive rather than preventative, so harm is only addressed once it has been reported.

As AI continues to grow and become more sophisticated, it is vital to adapt and grow with it. Collaboration between legislators and companies is crucial moving forward to protect users while preserving genuine human interactions.

It is up to us to ensure we are in control of AI, not the other way around.

Advay Jain is a Natural Sciences student at King’s College London with a strong interest in quantum computing, artificial intelligence, and finance. An outspoken advocate for student engagement, he has represented peers at university level and partnered with the Financial Times to promote financial literacy in schools. His independent projects include AI-powered stock tracking tools and revision applications for A-Level students with grade prediction capabilities. With a clear voice and a sharp analytical lens, Advay explores how technology can shape the future in various sectors.
