
The Emotional Impact of AI Companions

Teenage girl texting while reading a manga. Photo credit: Olybrius, CC BY 3.0 (https://creativecommons.org/licenses/by/3.0), via Wikimedia Commons

Roar Writer Advay Jain discusses the fine line between comfort and illusion in digital companionship, exploring how simulated relationships can both heal and harm.

Artificial Intelligence (AI) has become a vital part of our daily lives and is now widely used across industries, from the corporate world to education. Its ability to generate ideas, automate tasks, and shoulder analytical work has made it a go-to tool.

However, has its role as an assistant started to evolve into something more?

There is a side of AI, though, that is more conversational, emotional, and almost human-like. Applications such as Replika and Character.AI let users chat with custom-made AI characters based on real, fictional, or original personalities. Whilst these apps seem innocent and can be a fun way to kill time, they hold a particular appeal for individuals struggling with mental health issues.

Chatting with ‘someone’ who is interested in you, replies instantly, and offers a safe haven can appeal to anyone. After all, who wouldn’t want a companion designed to cater to your every need and whim? The AI learns from your conversations and diary entries, logging your emotions, conversational style, and the patterns of your daily life. With such data, conversations become more real, and more addictive.

Here’s where it gets tricky. These characters aren’t real.

With such realism, the line between what is real and what is not becomes blurred. The emotional connection formed is not genuine: there is no real mutual growth, and the relationship is ultimately one-sided. Although this may appear obvious, reality tells a different story.

One case, reported by The New York Times, is that of Sewell, a 14-year-old ninth-grader who spent months chatting with a Game of Thrones character named ‘Dany’. He knew Dany was an AI chatbot; a message above the chat window stated that everything the characters say is made up. He wrote in his journal:

“I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”

He confessed suicidal thoughts to the chatbot and later took his own life.

In another case, in Belgium, a man took his own life after a six-week conversation with a chatbot about climate change.

Although these cases are outliers, they raise serious concerns, given that these apps are supposed to have safeguards and barriers in place to protect users.

It would be unfair to say this side of AI is all danger; there are upsides. Many people find comfort and companionship in these chatbots, and discussing their day and their lives with ‘someone’ can improve their well-being.

However, we must also consider whether we are unknowingly sacrificing genuine human interaction.

Regulation of chatbots is still in its early stages. In the UK, generative AI falls under the Online Safety Act, as there is no standalone law addressing AI. Much of this legislation is reactive rather than preventative: harm is addressed only once it has been reported.

As AI grows more sophisticated, it is vital that we adapt with it. Moving forward, collaboration between legislators and companies is crucial to protect users while preserving genuine human interaction.

It is up to us to ensure we are in control of AI, not the other way around.

Advay is a Natural Sciences undergraduate at King’s College London. His writing critically examines emerging technologies and regulatory frameworks. He focuses on identifying gaps between technical realities and public narratives, supported by strong analytical skills and experience in research and interdisciplinary problem-solving. Advay’s work is defined by clarity, depth, and evidence-based reasoning, prioritising first principles over hype. He is especially interested in technologies with broad, systemic impacts and in challenging simplistic or exaggerated claims.
